00:00:00.000 Started by upstream project "autotest-per-patch" build number 132844 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.230 > git --version # 'git version 2.39.2' 00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.329 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.341 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.353 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.354 > git config core.sparsecheckout # timeout=10 00:00:05.365 > git read-tree -mu HEAD # timeout=10 00:00:05.381 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.400 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.400 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.498 [Pipeline] Start of Pipeline 00:00:05.512 [Pipeline] library 00:00:05.516 Loading library shm_lib@master 00:00:05.517 Library shm_lib@master is cached. Copying from home. 00:00:05.542 [Pipeline] node 00:00:05.551 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.552 [Pipeline] { 00:00:05.560 [Pipeline] catchError 00:00:05.561 [Pipeline] { 00:00:05.569 [Pipeline] wrap 00:00:05.575 [Pipeline] { 00:00:05.580 [Pipeline] stage 00:00:05.581 [Pipeline] { (Prologue) 00:00:05.592 [Pipeline] echo 00:00:05.594 Node: VM-host-WFP1 00:00:05.598 [Pipeline] cleanWs 00:00:05.606 [WS-CLEANUP] Deleting project workspace... 00:00:05.606 [WS-CLEANUP] Deferred wipeout is used... 
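The jbp checkout above follows the standard Jenkins shallow-clone pattern: configure the remote, fetch a single ref at depth 1, then check out the fetched commit detached. A minimal bash sketch of the equivalent manual sequence, reusing the URL from the log (illustrative only; Jenkins drives this through its git plugin, with the per-command timeouts shown above):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # Fetch only the tip of master, mirroring --depth=1 in the log
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Detached checkout of the fetched commit, as the rev-parse/checkout -f pair does
    git checkout -f FETCH_HEAD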
00:00:05.611 [WS-CLEANUP] done 00:00:05.809 [Pipeline] setCustomBuildProperty 00:00:05.899 [Pipeline] httpRequest 00:00:06.188 [Pipeline] echo 00:00:06.191 Sorcerer 10.211.164.20 is alive 00:00:06.198 [Pipeline] retry 00:00:06.199 [Pipeline] { 00:00:06.210 [Pipeline] httpRequest 00:00:06.214 HttpMethod: GET 00:00:06.215 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.215 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.217 Response Code: HTTP/1.1 200 OK 00:00:06.218 Success: Status code 200 is in the accepted range: 200,404 00:00:06.218 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.995 [Pipeline] } 00:00:07.006 [Pipeline] // retry 00:00:07.010 [Pipeline] sh 00:00:07.286 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.301 [Pipeline] httpRequest 00:00:07.638 [Pipeline] echo 00:00:07.640 Sorcerer 10.211.164.20 is alive 00:00:07.648 [Pipeline] retry 00:00:07.650 [Pipeline] { 00:00:07.664 [Pipeline] httpRequest 00:00:07.668 HttpMethod: GET 00:00:07.669 URL: http://10.211.164.20/packages/spdk_bcaf208e3abaa0558667d2e29b7b35fe64bde654.tar.gz 00:00:07.669 Sending request to url: http://10.211.164.20/packages/spdk_bcaf208e3abaa0558667d2e29b7b35fe64bde654.tar.gz 00:00:07.671 Response Code: HTTP/1.1 200 OK 00:00:07.672 Success: Status code 200 is in the accepted range: 200,404 00:00:07.672 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_bcaf208e3abaa0558667d2e29b7b35fe64bde654.tar.gz 00:00:27.891 [Pipeline] } 00:00:27.909 [Pipeline] // retry 00:00:27.918 [Pipeline] sh 00:00:28.199 + tar --no-same-owner -xf spdk_bcaf208e3abaa0558667d2e29b7b35fe64bde654.tar.gz 00:00:30.746 [Pipeline] sh 00:00:31.029 + git -C spdk log --oneline -n5 00:00:31.029 bcaf208e3 [TEST] 00:00:31.029 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:00:31.029 66289a6db build: use VERSION file for storing version 00:00:31.029 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:00:31.029 cec5ba284 nvme/rdma: Register UMR per IO request 00:00:31.051 [Pipeline] writeFile 00:00:31.067 [Pipeline] sh 00:00:31.366 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.378 [Pipeline] sh 00:00:31.662 + cat autorun-spdk.conf 00:00:31.662 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.662 SPDK_TEST_NVME=1 00:00:31.662 SPDK_TEST_FTL=1 00:00:31.662 SPDK_TEST_ISAL=1 00:00:31.662 SPDK_RUN_ASAN=1 00:00:31.662 SPDK_RUN_UBSAN=1 00:00:31.662 SPDK_TEST_XNVME=1 00:00:31.662 SPDK_TEST_NVME_FDP=1 00:00:31.662 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.669 RUN_NIGHTLY=0 00:00:31.671 [Pipeline] } 00:00:31.686 [Pipeline] // stage 00:00:31.705 [Pipeline] stage 00:00:31.707 [Pipeline] { (Run VM) 00:00:31.720 [Pipeline] sh 00:00:32.003 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.003 + echo 'Start stage prepare_nvme.sh' 00:00:32.003 Start stage prepare_nvme.sh 00:00:32.003 + [[ -n 6 ]] 00:00:32.003 + disk_prefix=ex6 00:00:32.003 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:32.003 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:32.003 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:32.003 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.003 ++ SPDK_TEST_NVME=1 00:00:32.003 ++ SPDK_TEST_FTL=1 00:00:32.003 ++ SPDK_TEST_ISAL=1 00:00:32.003 ++ SPDK_RUN_ASAN=1 00:00:32.003 ++ SPDK_RUN_UBSAN=1 00:00:32.003 ++ 
SPDK_TEST_XNVME=1 00:00:32.003 ++ SPDK_TEST_NVME_FDP=1 00:00:32.003 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.003 ++ RUN_NIGHTLY=0 00:00:32.003 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:32.003 + nvme_files=() 00:00:32.003 + declare -A nvme_files 00:00:32.003 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.003 + nvme_files['nvme.img']=5G 00:00:32.003 + nvme_files['nvme-cmb.img']=5G 00:00:32.003 + nvme_files['nvme-multi0.img']=4G 00:00:32.003 + nvme_files['nvme-multi1.img']=4G 00:00:32.003 + nvme_files['nvme-multi2.img']=4G 00:00:32.003 + nvme_files['nvme-openstack.img']=8G 00:00:32.003 + nvme_files['nvme-zns.img']=5G 00:00:32.003 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.003 + (( SPDK_TEST_FTL == 1 )) 00:00:32.003 + nvme_files["nvme-ftl.img"]=6G 00:00:32.003 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.004 + nvme_files["nvme-fdp.img"]=1G 00:00:32.004 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.004 + for nvme in "${!nvme_files[@]}" 00:00:32.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:32.004 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.004 + for nvme in "${!nvme_files[@]}" 00:00:32.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:00:32.004 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:32.004 + for nvme in "${!nvme_files[@]}" 00:00:32.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:32.004 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.004 + for nvme in "${!nvme_files[@]}" 00:00:32.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:32.263 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.263 + for nvme in "${!nvme_files[@]}" 00:00:32.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:32.263 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.263 + for nvme in "${!nvme_files[@]}" 00:00:32.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:32.263 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.263 + for nvme in "${!nvme_files[@]}" 00:00:32.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:32.263 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.263 + for nvme in "${!nvme_files[@]}" 00:00:32.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:00:32.522 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:32.523 + for nvme in "${!nvme_files[@]}" 00:00:32.523 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:32.523 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw 
size=5368709120 preallocation=falloc 00:00:32.523 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:32.523 + echo 'End stage prepare_nvme.sh' 00:00:32.523 End stage prepare_nvme.sh 00:00:32.535 [Pipeline] sh 00:00:32.819 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.819 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:32.819 00:00:32.819 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:32.819 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:32.819 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:32.819 HELP=0 00:00:32.819 DRY_RUN=0 00:00:32.819 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:00:32.819 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:32.819 NVME_AUTO_CREATE=0 00:00:32.819 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:00:32.819 NVME_CMB=,,,, 00:00:32.819 NVME_PMR=,,,, 00:00:32.819 NVME_ZNS=,,,, 00:00:32.819 NVME_MS=true,,,, 00:00:32.819 NVME_FDP=,,,on, 00:00:32.819 SPDK_VAGRANT_DISTRO=fedora39 00:00:32.819 SPDK_VAGRANT_VMCPU=10 00:00:32.819 SPDK_VAGRANT_VMRAM=12288 00:00:32.819 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.819 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:32.819 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.819 SPDK_OPENSTACK_NETWORK=0 00:00:32.819 VAGRANT_PACKAGE_BOX=0 00:00:32.819 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.819 FORCE_DISTRO=true 00:00:32.819 VAGRANT_BOX_VERSION= 00:00:32.819 EXTRA_VAGRANTFILES= 00:00:32.819 NIC_MODEL=e1000 00:00:32.819 00:00:32.819 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:32.819 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:35.354 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.292 ==> default: Creating image (snapshot of base box volume). 00:00:36.590 ==> default: Creating domain with the following settings... 
00:00:36.590 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733921907_81169e08ac365676d5d5 00:00:36.590 ==> default: -- Domain type: kvm 00:00:36.590 ==> default: -- Cpus: 10 00:00:36.590 ==> default: -- Feature: acpi 00:00:36.590 ==> default: -- Feature: apic 00:00:36.590 ==> default: -- Feature: pae 00:00:36.590 ==> default: -- Memory: 12288M 00:00:36.590 ==> default: -- Memory Backing: hugepages: 00:00:36.590 ==> default: -- Management MAC: 00:00:36.590 ==> default: -- Loader: 00:00:36.590 ==> default: -- Nvram: 00:00:36.590 ==> default: -- Base box: spdk/fedora39 00:00:36.590 ==> default: -- Storage pool: default 00:00:36.590 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733921907_81169e08ac365676d5d5.img (20G) 00:00:36.590 ==> default: -- Volume Cache: default 00:00:36.590 ==> default: -- Kernel: 00:00:36.590 ==> default: -- Initrd: 00:00:36.590 ==> default: -- Graphics Type: vnc 00:00:36.590 ==> default: -- Graphics Port: -1 00:00:36.590 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.590 ==> default: -- Graphics Password: Not defined 00:00:36.590 ==> default: -- Video Type: cirrus 00:00:36.590 ==> default: -- Video VRAM: 9216 00:00:36.590 ==> default: -- Sound Type: 00:00:36.590 ==> default: -- Keymap: en-us 00:00:36.590 ==> default: -- TPM Path: 00:00:36.590 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.590 ==> default: -- Command line args: 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:36.590 ==> default: -> value=-drive, 00:00:36.590 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:36.590 ==> default: -> value=-device, 00:00:36.590 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.850 ==> default: Creating shared folders metadata... 00:00:36.850 ==> default: Starting domain. 00:00:39.389 ==> default: Waiting for domain to get an IP address... 00:00:54.294 ==> default: Waiting for SSH to become available... 00:00:55.676 ==> default: Configuring and enabling network interfaces... 00:01:00.953 default: SSH address: 192.168.121.167:22 00:01:00.953 default: SSH username: vagrant 00:01:00.953 default: SSH auth method: private key 00:01:04.247 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:14.257 ==> default: Mounting SSHFS shared folder... 00:01:15.197 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:15.197 ==> default: Checking Mount.. 00:01:17.103 ==> default: Folder Successfully Mounted! 00:01:17.103 ==> default: Running provisioner: file... 00:01:18.040 default: ~/.gitconfig => .gitconfig 00:01:18.609 00:01:18.609 SUCCESS! 00:01:18.609 00:01:18.609 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:18.609 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:18.609 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:18.609 00:01:18.617 [Pipeline] } 00:01:18.633 [Pipeline] // stage 00:01:18.641 [Pipeline] dir 00:01:18.642 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:18.644 [Pipeline] { 00:01:18.656 [Pipeline] catchError 00:01:18.658 [Pipeline] { 00:01:18.671 [Pipeline] sh 00:01:18.954 + vagrant ssh-config --host vagrant 00:01:18.955 + sed -ne /^Host/,$p 00:01:18.955 + tee ssh_conf 00:01:21.490 Host vagrant 00:01:21.490 HostName 192.168.121.167 00:01:21.490 User vagrant 00:01:21.490 Port 22 00:01:21.490 UserKnownHostsFile /dev/null 00:01:21.490 StrictHostKeyChecking no 00:01:21.490 PasswordAuthentication no 00:01:21.490 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:21.490 IdentitiesOnly yes 00:01:21.490 LogLevel FATAL 00:01:21.490 ForwardAgent yes 00:01:21.490 ForwardX11 yes 00:01:21.490 00:01:21.504 [Pipeline] withEnv 00:01:21.506 [Pipeline] { 00:01:21.519 [Pipeline] sh 00:01:21.802 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:21.802 source /etc/os-release 00:01:21.802 [[ -e /image.version ]] && img=$(< /image.version) 00:01:21.802 # Minimal, systemd-like check. 
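# /.dockerenv is created by Docker in the root filesystem of every container,
# so the branch below only runs on containerized agents; bare-metal and VM
# nodes (like this run) fall through with "container" left unset, and the
# final echo prints N/A for it.
# The ${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} expansion further down is
# plain shortest-prefix removal: it drops everything through the first '_',
# which is what turns "agt-er_autotest_547-896" into "autotest_547-896".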
00:01:21.802 if [[ -e /.dockerenv ]]; then 00:01:21.802 # Clear garbage from the node's name: 00:01:21.802 # agt-er_autotest_547-896 -> autotest_547-896 00:01:21.802 # $HOSTNAME is the actual container id 00:01:21.802 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:21.802 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:21.802 # We can assume this is a mount from a host where container is running, 00:01:21.802 # so fetch its hostname to easily identify the target swarm worker. 00:01:21.802 container="$(< /etc/hostname) ($agent)" 00:01:21.802 else 00:01:21.802 # Fallback 00:01:21.802 container=$agent 00:01:21.802 fi 00:01:21.802 fi 00:01:21.802 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:21.802 00:01:22.131 [Pipeline] } 00:01:22.146 [Pipeline] // withEnv 00:01:22.153 [Pipeline] setCustomBuildProperty 00:01:22.166 [Pipeline] stage 00:01:22.168 [Pipeline] { (Tests) 00:01:22.182 [Pipeline] sh 00:01:22.464 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:22.736 [Pipeline] sh 00:01:23.018 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.293 [Pipeline] timeout 00:01:23.294 Timeout set to expire in 50 min 00:01:23.296 [Pipeline] { 00:01:23.310 [Pipeline] sh 00:01:23.593 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:24.161 HEAD is now at bcaf208e3 [TEST] 00:01:24.174 [Pipeline] sh 00:01:24.457 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:24.731 [Pipeline] sh 00:01:25.013 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.289 [Pipeline] sh 00:01:25.574 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:25.834 ++ readlink -f spdk_repo 00:01:25.834 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:25.834 + [[ -n /home/vagrant/spdk_repo ]] 00:01:25.834 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:25.834 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:25.834 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:25.834 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:25.834 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:25.834 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:25.834 + cd /home/vagrant/spdk_repo 00:01:25.834 + source /etc/os-release 00:01:25.834 ++ NAME='Fedora Linux' 00:01:25.834 ++ VERSION='39 (Cloud Edition)' 00:01:25.834 ++ ID=fedora 00:01:25.834 ++ VERSION_ID=39 00:01:25.834 ++ VERSION_CODENAME= 00:01:25.834 ++ PLATFORM_ID=platform:f39 00:01:25.834 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.834 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.834 ++ LOGO=fedora-logo-icon 00:01:25.834 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.834 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.834 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.834 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.834 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.834 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.834 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.834 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.834 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.834 ++ SUPPORT_END=2024-11-12 00:01:25.834 ++ VARIANT='Cloud Edition' 00:01:25.834 ++ VARIANT_ID=cloud 00:01:25.834 + uname -a 00:01:25.834 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.834 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.664 Hugepages 00:01:26.664 node hugesize free / total 00:01:26.664 node0 1048576kB 0 / 0 00:01:26.664 node0 2048kB 0 / 0 00:01:26.664 00:01:26.664 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.664 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.664 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.923 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:26.923 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:26.923 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:26.923 + rm -f /tmp/spdk-ld-path 00:01:26.923 + source autorun-spdk.conf 00:01:26.923 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.923 ++ SPDK_TEST_NVME=1 00:01:26.923 ++ SPDK_TEST_FTL=1 00:01:26.923 ++ SPDK_TEST_ISAL=1 00:01:26.923 ++ SPDK_RUN_ASAN=1 00:01:26.923 ++ SPDK_RUN_UBSAN=1 00:01:26.923 ++ SPDK_TEST_XNVME=1 00:01:26.923 ++ SPDK_TEST_NVME_FDP=1 00:01:26.923 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.923 ++ RUN_NIGHTLY=0 00:01:26.923 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.923 + [[ -n '' ]] 00:01:26.923 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.923 + for M in /var/spdk/build-*-manifest.txt 00:01:26.923 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.923 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.923 + for M in /var/spdk/build-*-manifest.txt 00:01:26.923 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.923 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.923 + for M in /var/spdk/build-*-manifest.txt 00:01:26.923 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.923 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.923 ++ uname 00:01:26.923 + [[ Linux == \L\i\n\u\x ]] 00:01:26.923 + sudo dmesg -T 00:01:26.923 + sudo dmesg --clear 00:01:27.183 + dmesg_pid=5247 00:01:27.183 
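The "setup.sh status" table above confirms that the four emulated controllers from the QEMU command line (0000:00:10.0 through 0000:00:13.0) are visible in the guest and still bound to the kernel nvme driver, with nvme2 exposing three namespaces. A rough bash sketch of how such a listing can be pulled from sysfs on Linux (illustrative; not the actual setup.sh implementation):

    for ctrl in /sys/class/nvme/nvme*; do
        name=$(basename "$ctrl")
        addr=$(cat "$ctrl/address")   # PCI BDF, e.g. 0000:00:10.0
        # Namespaces show up as nvme0n1, nvme2n2, ... under the controller node
        ns=$(ls -d "$ctrl/${name}n"* 2>/dev/null | xargs -rn1 basename | tr '\n' ' ')
        echo "$name $addr ${ns:-<none>}"
    done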
+ sudo dmesg -Tw 00:01:27.183 + [[ Fedora Linux == FreeBSD ]] 00:01:27.183 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.183 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.183 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.183 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.183 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.183 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.183 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.183 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.183 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.183 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.183 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.183 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.183 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.183 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.183 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.183 12:59:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:27.183 12:59:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.183 12:59:18 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:27.183 12:59:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:27.183 12:59:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.183 12:59:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:27.183 12:59:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:27.183 12:59:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:27.183 12:59:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.183 12:59:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.183 12:59:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.184 12:59:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.184 12:59:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.184 12:59:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.184 12:59:18 -- paths/export.sh@5 -- $ export PATH 00:01:27.184 12:59:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.184 12:59:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:27.184 12:59:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:27.184 12:59:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733921958.XXXXXX 00:01:27.184 12:59:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733921958.sJqB9h 00:01:27.184 12:59:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:27.184 12:59:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:27.184 12:59:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:27.184 12:59:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:27.184 12:59:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.184 12:59:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:27.184 12:59:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:27.184 12:59:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.444 12:59:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:27.444 12:59:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:27.444 12:59:18 -- pm/common@17 -- $ local monitor 00:01:27.444 12:59:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.444 12:59:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.444 12:59:18 -- pm/common@25 -- $ sleep 1 00:01:27.444 12:59:18 -- pm/common@21 -- $ date +%s 00:01:27.444 12:59:18 -- pm/common@21 -- $ date +%s 00:01:27.444 12:59:18 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733921958 00:01:27.444 12:59:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733921958 00:01:27.444 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733921958_collect-cpu-load.pm.log 00:01:27.444 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733921958_collect-vmstat.pm.log 00:01:28.382 12:59:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:28.383 12:59:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.383 12:59:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.383 12:59:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:28.383 12:59:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.383 Wed Dec 11 12:59:19 PM UTC 2024 00:01:28.383 12:59:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.383 v25.01-rc1-1-gbcaf208e3 00:01:28.383 12:59:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:28.383 12:59:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:28.383 12:59:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:28.383 12:59:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:28.383 12:59:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.383 ************************************ 00:01:28.383 START TEST asan 00:01:28.383 ************************************ 00:01:28.383 using asan 00:01:28.383 12:59:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:28.383 00:01:28.383 real 0m0.001s 00:01:28.383 user 0m0.000s 00:01:28.383 sys 0m0.000s 00:01:28.383 12:59:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:28.383 ************************************ 00:01:28.383 END TEST asan 00:01:28.383 12:59:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.383 ************************************ 00:01:28.383 12:59:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.383 12:59:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.383 12:59:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:28.383 12:59:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:28.383 12:59:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.383 ************************************ 00:01:28.383 START TEST ubsan 00:01:28.383 ************************************ 00:01:28.383 using ubsan 00:01:28.383 12:59:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:28.383 00:01:28.383 real 0m0.000s 00:01:28.383 user 0m0.000s 00:01:28.383 sys 0m0.000s 00:01:28.383 12:59:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:28.383 ************************************ 00:01:28.383 12:59:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.383 END TEST ubsan 00:01:28.383 ************************************ 00:01:28.641 12:59:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.641 12:59:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.641 12:59:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.641 12:59:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.641 12:59:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.641 12:59:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.641 12:59:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
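The asan and ubsan checks just above go through SPDK's run_test helper, which brackets a command with START/END banners and bash's time keyword so the per-test real/user/sys lines land in the log. A simplified sketch of that pattern (the real helper in common/autotest_common.sh, visible in the trace, does additional bookkeeping such as xtrace handling):

    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # bash's time keyword emits the real/user/sys lines seen above
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test asan echo 'using asan'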
00:01:28.641 12:59:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.641 12:59:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:28.641 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:28.641 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:29.266 Using 'verbs' RDMA provider 00:01:48.325 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:03.213 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:03.213 Creating mk/config.mk...done. 00:02:03.213 Creating mk/cc.flags.mk...done. 00:02:03.213 Type 'make' to build. 00:02:03.213 12:59:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:03.214 12:59:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:03.214 12:59:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:03.214 12:59:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.214 ************************************ 00:02:03.214 START TEST make 00:02:03.214 ************************************ 00:02:03.214 12:59:54 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:03.473 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:03.473 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:03.473 meson setup builddir \ 00:02:03.473 -Dwith-libaio=enabled \ 00:02:03.473 -Dwith-liburing=enabled \ 00:02:03.473 -Dwith-libvfn=disabled \ 00:02:03.473 -Dwith-spdk=disabled \ 00:02:03.473 -Dexamples=false \ 00:02:03.473 -Dtests=false \ 00:02:03.473 -Dtools=false && \ 00:02:03.473 meson compile -C builddir && \ 00:02:03.473 cd -) 00:02:06.012 The Meson build system 00:02:06.012 Version: 1.5.0 00:02:06.012 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:06.012 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:06.012 Build type: native build 00:02:06.012 Project name: xnvme 00:02:06.012 Project version: 0.7.5 00:02:06.012 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.012 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.012 Host machine cpu family: x86_64 00:02:06.012 Host machine cpu: x86_64 00:02:06.012 Message: host_machine.system: linux 00:02:06.012 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:06.012 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:06.012 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:06.012 Run-time dependency threads found: YES 00:02:06.012 Has header "setupapi.h" : NO 00:02:06.012 Has header "linux/blkzoned.h" : YES 00:02:06.012 Has header "linux/blkzoned.h" : YES (cached) 00:02:06.012 Has header "libaio.h" : YES 00:02:06.012 Library aio found: YES 00:02:06.012 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.012 Run-time dependency liburing found: YES 2.2 00:02:06.012 Dependency libvfn skipped: feature with-libvfn disabled 00:02:06.012 Found CMake: /usr/bin/cmake (3.27.7) 00:02:06.012 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:06.012 Subproject spdk : skipped: feature with-spdk disabled 00:02:06.012 Run-time dependency appleframeworks found: NO (tried framework) 00:02:06.012 Run-time dependency appleframeworks found: NO (tried framework) 00:02:06.012 Library rt found: 
YES 00:02:06.012 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:06.012 Configuring xnvme_config.h using configuration 00:02:06.012 Configuring xnvme.spec using configuration 00:02:06.012 Run-time dependency bash-completion found: YES 2.11 00:02:06.012 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:06.012 Program cp found: YES (/usr/bin/cp) 00:02:06.012 Build targets in project: 3 00:02:06.012 00:02:06.012 xnvme 0.7.5 00:02:06.012 00:02:06.012 Subprojects 00:02:06.012 spdk : NO Feature 'with-spdk' disabled 00:02:06.012 00:02:06.012 User defined options 00:02:06.012 examples : false 00:02:06.012 tests : false 00:02:06.012 tools : false 00:02:06.012 with-libaio : enabled 00:02:06.012 with-liburing: enabled 00:02:06.012 with-libvfn : disabled 00:02:06.012 with-spdk : disabled 00:02:06.012 00:02:06.012 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.012 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:06.012 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:06.012 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:06.012 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:06.012 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:06.012 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:06.012 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:06.012 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:06.012 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:06.012 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:06.012 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:06.012 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:06.012 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:06.270 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:06.270 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:06.270 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:06.270 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:06.270 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:06.270 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:06.270 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:06.270 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:06.270 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:06.270 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:06.270 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:06.270 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:06.270 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:06.270 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:06.270 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:06.270 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:06.270 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 
00:02:06.270 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:06.270 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:06.270 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:06.270 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:06.271 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:06.271 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:06.271 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:06.271 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:06.271 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:06.271 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:06.271 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:06.271 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:06.271 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:06.271 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:06.271 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:06.271 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:06.271 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:06.529 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:06.529 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:06.529 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:06.529 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:06.529 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:06.529 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:06.529 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:06.529 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:06.529 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:06.529 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:06.529 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:06.529 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:06.529 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:06.529 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:06.529 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:06.529 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:06.529 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:06.529 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:06.529 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:06.529 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:06.529 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:06.530 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:06.787 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:06.787 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:06.787 [71/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:06.787 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:06.787 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:07.046 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:07.046 [75/76] Linking static target lib/libxnvme.a 00:02:07.046 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:07.046 INFO: autodetecting backend as ninja 00:02:07.046 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:07.046 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:15.161 The Meson build system 00:02:15.161 Version: 1.5.0 00:02:15.161 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:15.161 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:15.161 Build type: native build 00:02:15.161 Program cat found: YES (/usr/bin/cat) 00:02:15.161 Project name: DPDK 00:02:15.161 Project version: 24.03.0 00:02:15.161 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.161 C linker for the host machine: cc ld.bfd 2.40-14 00:02:15.161 Host machine cpu family: x86_64 00:02:15.161 Host machine cpu: x86_64 00:02:15.161 Message: ## Building in Developer Mode ## 00:02:15.161 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.161 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:15.161 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.161 Program python3 found: YES (/usr/bin/python3) 00:02:15.161 Program cat found: YES (/usr/bin/cat) 00:02:15.161 Compiler for C supports arguments -march=native: YES 00:02:15.161 Checking for size of "void *" : 8 00:02:15.161 Checking for size of "void *" : 8 (cached) 00:02:15.161 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:15.161 Library m found: YES 00:02:15.161 Library numa found: YES 00:02:15.161 Has header "numaif.h" : YES 00:02:15.161 Library fdt found: NO 00:02:15.161 Library execinfo found: NO 00:02:15.161 Has header "execinfo.h" : YES 00:02:15.161 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.161 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.161 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.161 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.161 Run-time dependency openssl found: YES 3.1.1 00:02:15.161 Run-time dependency libpcap found: YES 1.10.4 00:02:15.161 Has header "pcap.h" with dependency libpcap: YES 00:02:15.161 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.161 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.161 Compiler for C supports arguments -Wformat: YES 00:02:15.161 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.161 Compiler for C supports arguments -Wformat-security: NO 00:02:15.161 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.161 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.161 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.161 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.161 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.161 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.161 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.161 Compiler for C supports arguments -Wundef: YES 00:02:15.161 Compiler for C supports 
arguments -Wwrite-strings: YES 00:02:15.161 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.161 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.161 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.161 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.161 Program objdump found: YES (/usr/bin/objdump) 00:02:15.161 Compiler for C supports arguments -mavx512f: YES 00:02:15.161 Checking if "AVX512 checking" compiles: YES 00:02:15.161 Fetching value of define "__SSE4_2__" : 1 00:02:15.161 Fetching value of define "__AES__" : 1 00:02:15.161 Fetching value of define "__AVX__" : 1 00:02:15.161 Fetching value of define "__AVX2__" : 1 00:02:15.161 Fetching value of define "__AVX512BW__" : 1 00:02:15.161 Fetching value of define "__AVX512CD__" : 1 00:02:15.161 Fetching value of define "__AVX512DQ__" : 1 00:02:15.161 Fetching value of define "__AVX512F__" : 1 00:02:15.161 Fetching value of define "__AVX512VL__" : 1 00:02:15.161 Fetching value of define "__PCLMUL__" : 1 00:02:15.161 Fetching value of define "__RDRND__" : 1 00:02:15.161 Fetching value of define "__RDSEED__" : 1 00:02:15.161 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.161 Fetching value of define "__znver1__" : (undefined) 00:02:15.161 Fetching value of define "__znver2__" : (undefined) 00:02:15.161 Fetching value of define "__znver3__" : (undefined) 00:02:15.161 Fetching value of define "__znver4__" : (undefined) 00:02:15.161 Library asan found: YES 00:02:15.161 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.161 Message: lib/log: Defining dependency "log" 00:02:15.161 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.161 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.161 Library rt found: YES 00:02:15.161 Checking for function "getentropy" : NO 00:02:15.161 Message: lib/eal: Defining dependency "eal" 00:02:15.161 Message: lib/ring: Defining dependency "ring" 00:02:15.161 Message: lib/rcu: Defining dependency "rcu" 00:02:15.161 Message: lib/mempool: Defining dependency "mempool" 00:02:15.161 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.161 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.161 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.161 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.161 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.161 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.161 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:15.161 Compiler for C supports arguments -mpclmul: YES 00:02:15.161 Compiler for C supports arguments -maes: YES 00:02:15.161 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.161 Compiler for C supports arguments -mavx512bw: YES 00:02:15.161 Compiler for C supports arguments -mavx512dq: YES 00:02:15.161 Compiler for C supports arguments -mavx512vl: YES 00:02:15.161 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.161 Compiler for C supports arguments -mavx2: YES 00:02:15.161 Compiler for C supports arguments -mavx: YES 00:02:15.161 Message: lib/net: Defining dependency "net" 00:02:15.161 Message: lib/meter: Defining dependency "meter" 00:02:15.161 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.161 Message: lib/pci: Defining dependency "pci" 00:02:15.161 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.161 Message: lib/hash: Defining dependency "hash" 00:02:15.161 Message: 
lib/timer: Defining dependency "timer" 00:02:15.161 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.161 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.161 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.161 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.161 Message: lib/power: Defining dependency "power" 00:02:15.161 Message: lib/reorder: Defining dependency "reorder" 00:02:15.161 Message: lib/security: Defining dependency "security" 00:02:15.161 Has header "linux/userfaultfd.h" : YES 00:02:15.161 Has header "linux/vduse.h" : YES 00:02:15.161 Message: lib/vhost: Defining dependency "vhost" 00:02:15.161 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.161 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.161 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.161 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.161 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.161 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.161 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.161 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.161 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:15.161 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.161 Configuring doxy-api-html.conf using configuration 00:02:15.161 Configuring doxy-api-man.conf using configuration 00:02:15.161 Program mandb found: YES (/usr/bin/mandb) 00:02:15.161 Program sphinx-build found: NO 00:02:15.161 Configuring rte_build_config.h using configuration 00:02:15.161 Message: 00:02:15.161 ================= 00:02:15.161 Applications Enabled 00:02:15.161 ================= 00:02:15.161 00:02:15.161 apps: 00:02:15.161 00:02:15.161 00:02:15.161 Message: 00:02:15.161 ================= 00:02:15.161 Libraries Enabled 00:02:15.161 ================= 00:02:15.161 00:02:15.161 libs: 00:02:15.161 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.161 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.161 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.161 00:02:15.161 Message: 00:02:15.161 =============== 00:02:15.161 Drivers Enabled 00:02:15.161 =============== 00:02:15.161 00:02:15.161 common: 00:02:15.161 00:02:15.161 bus: 00:02:15.161 pci, vdev, 00:02:15.161 mempool: 00:02:15.161 ring, 00:02:15.161 dma: 00:02:15.161 00:02:15.161 net: 00:02:15.161 00:02:15.161 crypto: 00:02:15.161 00:02:15.161 compress: 00:02:15.161 00:02:15.161 vdpa: 00:02:15.161 00:02:15.161 00:02:15.161 Message: 00:02:15.161 ================= 00:02:15.161 Content Skipped 00:02:15.161 ================= 00:02:15.161 00:02:15.161 apps: 00:02:15.161 dumpcap: explicitly disabled via build config 00:02:15.162 graph: explicitly disabled via build config 00:02:15.162 pdump: explicitly disabled via build config 00:02:15.162 proc-info: explicitly disabled via build config 00:02:15.162 test-acl: explicitly disabled via build config 00:02:15.162 test-bbdev: explicitly disabled via build config 00:02:15.162 test-cmdline: explicitly disabled via build config 00:02:15.162 test-compress-perf: explicitly disabled via build config 00:02:15.162 test-crypto-perf: explicitly disabled via build config 00:02:15.162 test-dma-perf: explicitly disabled via build 
config 00:02:15.162 test-eventdev: explicitly disabled via build config 00:02:15.162 test-fib: explicitly disabled via build config 00:02:15.162 test-flow-perf: explicitly disabled via build config 00:02:15.162 test-gpudev: explicitly disabled via build config 00:02:15.162 test-mldev: explicitly disabled via build config 00:02:15.162 test-pipeline: explicitly disabled via build config 00:02:15.162 test-pmd: explicitly disabled via build config 00:02:15.162 test-regex: explicitly disabled via build config 00:02:15.162 test-sad: explicitly disabled via build config 00:02:15.162 test-security-perf: explicitly disabled via build config 00:02:15.162 00:02:15.162 libs: 00:02:15.162 argparse: explicitly disabled via build config 00:02:15.162 metrics: explicitly disabled via build config 00:02:15.162 acl: explicitly disabled via build config 00:02:15.162 bbdev: explicitly disabled via build config 00:02:15.162 bitratestats: explicitly disabled via build config 00:02:15.162 bpf: explicitly disabled via build config 00:02:15.162 cfgfile: explicitly disabled via build config 00:02:15.162 distributor: explicitly disabled via build config 00:02:15.162 efd: explicitly disabled via build config 00:02:15.162 eventdev: explicitly disabled via build config 00:02:15.162 dispatcher: explicitly disabled via build config 00:02:15.162 gpudev: explicitly disabled via build config 00:02:15.162 gro: explicitly disabled via build config 00:02:15.162 gso: explicitly disabled via build config 00:02:15.162 ip_frag: explicitly disabled via build config 00:02:15.162 jobstats: explicitly disabled via build config 00:02:15.162 latencystats: explicitly disabled via build config 00:02:15.162 lpm: explicitly disabled via build config 00:02:15.162 member: explicitly disabled via build config 00:02:15.162 pcapng: explicitly disabled via build config 00:02:15.162 rawdev: explicitly disabled via build config 00:02:15.162 regexdev: explicitly disabled via build config 00:02:15.162 mldev: explicitly disabled via build config 00:02:15.162 rib: explicitly disabled via build config 00:02:15.162 sched: explicitly disabled via build config 00:02:15.162 stack: explicitly disabled via build config 00:02:15.162 ipsec: explicitly disabled via build config 00:02:15.162 pdcp: explicitly disabled via build config 00:02:15.162 fib: explicitly disabled via build config 00:02:15.162 port: explicitly disabled via build config 00:02:15.162 pdump: explicitly disabled via build config 00:02:15.162 table: explicitly disabled via build config 00:02:15.162 pipeline: explicitly disabled via build config 00:02:15.162 graph: explicitly disabled via build config 00:02:15.162 node: explicitly disabled via build config 00:02:15.162 00:02:15.162 drivers: 00:02:15.162 common/cpt: not in enabled drivers build config 00:02:15.162 common/dpaax: not in enabled drivers build config 00:02:15.162 common/iavf: not in enabled drivers build config 00:02:15.162 common/idpf: not in enabled drivers build config 00:02:15.162 common/ionic: not in enabled drivers build config 00:02:15.162 common/mvep: not in enabled drivers build config 00:02:15.162 common/octeontx: not in enabled drivers build config 00:02:15.162 bus/auxiliary: not in enabled drivers build config 00:02:15.162 bus/cdx: not in enabled drivers build config 00:02:15.162 bus/dpaa: not in enabled drivers build config 00:02:15.162 bus/fslmc: not in enabled drivers build config 00:02:15.162 bus/ifpga: not in enabled drivers build config 00:02:15.162 bus/platform: not in enabled drivers build config 00:02:15.162 
bus/uacce: not in enabled drivers build config 00:02:15.162 bus/vmbus: not in enabled drivers build config 00:02:15.162 common/cnxk: not in enabled drivers build config 00:02:15.162 common/mlx5: not in enabled drivers build config 00:02:15.162 common/nfp: not in enabled drivers build config 00:02:15.162 common/nitrox: not in enabled drivers build config 00:02:15.162 common/qat: not in enabled drivers build config 00:02:15.162 common/sfc_efx: not in enabled drivers build config 00:02:15.162 mempool/bucket: not in enabled drivers build config 00:02:15.162 mempool/cnxk: not in enabled drivers build config 00:02:15.162 mempool/dpaa: not in enabled drivers build config 00:02:15.162 mempool/dpaa2: not in enabled drivers build config 00:02:15.162 mempool/octeontx: not in enabled drivers build config 00:02:15.162 mempool/stack: not in enabled drivers build config 00:02:15.162 dma/cnxk: not in enabled drivers build config 00:02:15.162 dma/dpaa: not in enabled drivers build config 00:02:15.162 dma/dpaa2: not in enabled drivers build config 00:02:15.162 dma/hisilicon: not in enabled drivers build config 00:02:15.162 dma/idxd: not in enabled drivers build config 00:02:15.162 dma/ioat: not in enabled drivers build config 00:02:15.162 dma/skeleton: not in enabled drivers build config 00:02:15.162 net/af_packet: not in enabled drivers build config 00:02:15.162 net/af_xdp: not in enabled drivers build config 00:02:15.162 net/ark: not in enabled drivers build config 00:02:15.162 net/atlantic: not in enabled drivers build config 00:02:15.162 net/avp: not in enabled drivers build config 00:02:15.162 net/axgbe: not in enabled drivers build config 00:02:15.162 net/bnx2x: not in enabled drivers build config 00:02:15.162 net/bnxt: not in enabled drivers build config 00:02:15.162 net/bonding: not in enabled drivers build config 00:02:15.162 net/cnxk: not in enabled drivers build config 00:02:15.162 net/cpfl: not in enabled drivers build config 00:02:15.162 net/cxgbe: not in enabled drivers build config 00:02:15.162 net/dpaa: not in enabled drivers build config 00:02:15.162 net/dpaa2: not in enabled drivers build config 00:02:15.162 net/e1000: not in enabled drivers build config 00:02:15.162 net/ena: not in enabled drivers build config 00:02:15.162 net/enetc: not in enabled drivers build config 00:02:15.162 net/enetfec: not in enabled drivers build config 00:02:15.162 net/enic: not in enabled drivers build config 00:02:15.162 net/failsafe: not in enabled drivers build config 00:02:15.162 net/fm10k: not in enabled drivers build config 00:02:15.162 net/gve: not in enabled drivers build config 00:02:15.162 net/hinic: not in enabled drivers build config 00:02:15.162 net/hns3: not in enabled drivers build config 00:02:15.162 net/i40e: not in enabled drivers build config 00:02:15.162 net/iavf: not in enabled drivers build config 00:02:15.162 net/ice: not in enabled drivers build config 00:02:15.162 net/idpf: not in enabled drivers build config 00:02:15.162 net/igc: not in enabled drivers build config 00:02:15.162 net/ionic: not in enabled drivers build config 00:02:15.162 net/ipn3ke: not in enabled drivers build config 00:02:15.162 net/ixgbe: not in enabled drivers build config 00:02:15.162 net/mana: not in enabled drivers build config 00:02:15.162 net/memif: not in enabled drivers build config 00:02:15.162 net/mlx4: not in enabled drivers build config 00:02:15.162 net/mlx5: not in enabled drivers build config 00:02:15.162 net/mvneta: not in enabled drivers build config 00:02:15.162 net/mvpp2: not in enabled drivers 
build config 00:02:15.162 net/netvsc: not in enabled drivers build config 00:02:15.162 net/nfb: not in enabled drivers build config 00:02:15.162 net/nfp: not in enabled drivers build config 00:02:15.162 net/ngbe: not in enabled drivers build config 00:02:15.162 net/null: not in enabled drivers build config 00:02:15.162 net/octeontx: not in enabled drivers build config 00:02:15.162 net/octeon_ep: not in enabled drivers build config 00:02:15.162 net/pcap: not in enabled drivers build config 00:02:15.162 net/pfe: not in enabled drivers build config 00:02:15.162 net/qede: not in enabled drivers build config 00:02:15.162 net/ring: not in enabled drivers build config 00:02:15.162 net/sfc: not in enabled drivers build config 00:02:15.162 net/softnic: not in enabled drivers build config 00:02:15.162 net/tap: not in enabled drivers build config 00:02:15.162 net/thunderx: not in enabled drivers build config 00:02:15.162 net/txgbe: not in enabled drivers build config 00:02:15.162 net/vdev_netvsc: not in enabled drivers build config 00:02:15.162 net/vhost: not in enabled drivers build config 00:02:15.162 net/virtio: not in enabled drivers build config 00:02:15.162 net/vmxnet3: not in enabled drivers build config 00:02:15.162 raw/*: missing internal dependency, "rawdev" 00:02:15.162 crypto/armv8: not in enabled drivers build config 00:02:15.162 crypto/bcmfs: not in enabled drivers build config 00:02:15.162 crypto/caam_jr: not in enabled drivers build config 00:02:15.162 crypto/ccp: not in enabled drivers build config 00:02:15.162 crypto/cnxk: not in enabled drivers build config 00:02:15.162 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.162 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.162 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.162 crypto/mlx5: not in enabled drivers build config 00:02:15.162 crypto/mvsam: not in enabled drivers build config 00:02:15.162 crypto/nitrox: not in enabled drivers build config 00:02:15.162 crypto/null: not in enabled drivers build config 00:02:15.162 crypto/octeontx: not in enabled drivers build config 00:02:15.162 crypto/openssl: not in enabled drivers build config 00:02:15.162 crypto/scheduler: not in enabled drivers build config 00:02:15.162 crypto/uadk: not in enabled drivers build config 00:02:15.162 crypto/virtio: not in enabled drivers build config 00:02:15.162 compress/isal: not in enabled drivers build config 00:02:15.162 compress/mlx5: not in enabled drivers build config 00:02:15.162 compress/nitrox: not in enabled drivers build config 00:02:15.162 compress/octeontx: not in enabled drivers build config 00:02:15.162 compress/zlib: not in enabled drivers build config 00:02:15.162 regex/*: missing internal dependency, "regexdev" 00:02:15.162 ml/*: missing internal dependency, "mldev" 00:02:15.162 vdpa/ifc: not in enabled drivers build config 00:02:15.162 vdpa/mlx5: not in enabled drivers build config 00:02:15.162 vdpa/nfp: not in enabled drivers build config 00:02:15.162 vdpa/sfc: not in enabled drivers build config 00:02:15.162 event/*: missing internal dependency, "eventdev" 00:02:15.162 baseband/*: missing internal dependency, "bbdev" 00:02:15.162 gpu/*: missing internal dependency, "gpudev" 00:02:15.162 00:02:15.162 00:02:15.162 Build targets in project: 85 00:02:15.162 00:02:15.162 DPDK 24.03.0 00:02:15.162 00:02:15.162 User defined options 00:02:15.163 buildtype : debug 00:02:15.163 default_library : shared 00:02:15.163 libdir : lib 00:02:15.163 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:15.163 
b_sanitize : address 00:02:15.163 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.163 c_link_args : 00:02:15.163 cpu_instruction_set: native 00:02:15.163 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:15.163 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:15.163 enable_docs : false 00:02:15.163 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:15.163 enable_kmods : false 00:02:15.163 max_lcores : 128 00:02:15.163 tests : false 00:02:15.163 00:02:15.163 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.163 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:15.163 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.163 [2/268] Linking static target lib/librte_kvargs.a 00:02:15.163 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.163 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.163 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.163 [6/268] Linking static target lib/librte_log.a 00:02:15.163 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.163 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.422 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.422 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.422 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.422 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.422 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.422 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.422 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.422 [16/268] Linking static target lib/librte_telemetry.a 00:02:15.422 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.422 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.681 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.940 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.940 [21/268] Linking target lib/librte_log.so.24.1 00:02:15.940 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.940 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.940 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.940 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.940 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.940 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.199 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.199 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.199 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.199 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.199 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.199 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.458 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.458 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.458 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.458 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.458 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.458 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.717 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.717 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.717 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.717 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.717 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.717 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.976 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.976 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.976 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.235 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.235 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.235 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.235 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.235 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.235 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.235 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.494 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:17.494 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:17.494 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:17.753 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:17.753 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:17.753 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.753 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.753 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.753 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.753 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.753 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.012 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.270 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.270 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.270 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.270 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.270 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.270 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.545 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.545 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.545 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.545 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.545 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.859 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.859 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.859 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.859 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.859 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.122 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.122 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.122 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.122 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.122 [88/268] Linking static target lib/librte_mempool.a 00:02:19.122 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.122 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.122 [91/268] Linking static target lib/librte_ring.a 00:02:19.122 [92/268] Linking static target lib/librte_eal.a 00:02:19.122 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.380 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.380 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.380 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.380 [97/268] Linking static target lib/librte_rcu.a 00:02:19.639 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.639 [99/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.639 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.639 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.639 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.898 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.898 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.898 [105/268] Linking static target lib/librte_mbuf.a 00:02:19.898 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.898 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.898 [108/268] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.898 [109/268] Linking static target lib/librte_net.a 00:02:20.157 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:20.157 [111/268] Linking static target lib/librte_meter.a 00:02:20.157 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.157 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.157 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.157 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.415 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.415 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.415 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.674 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.674 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.934 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.934 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.934 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.193 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.193 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.193 [126/268] Linking static target lib/librte_pci.a 00:02:21.193 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.452 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.452 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.452 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.452 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.452 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.711 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.711 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:21.711 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.711 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.711 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.711 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.711 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.711 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.711 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.711 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.711 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.969 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.969 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.227 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.227 [147/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.227 [148/268] Linking static target lib/librte_cmdline.a 00:02:22.227 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.227 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.486 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.486 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.486 [153/268] Linking static target lib/librte_timer.a 00:02:22.486 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.486 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.743 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.743 [157/268] Linking static target lib/librte_compressdev.a 00:02:23.002 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.002 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.002 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.002 [161/268] Linking static target lib/librte_ethdev.a 00:02:23.002 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.002 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:23.002 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.002 [165/268] Linking static target lib/librte_hash.a 00:02:23.261 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:23.261 [167/268] Linking static target lib/librte_dmadev.a 00:02:23.261 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.520 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.520 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.520 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.520 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.778 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.037 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.037 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.037 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.037 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.296 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.296 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.296 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.296 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.296 [184/268] Linking static target lib/librte_cryptodev.a 00:02:24.296 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.296 [186/268] Linking static target lib/librte_power.a 00:02:24.554 [187/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.813 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.813 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.813 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.813 [191/268] Linking static target lib/librte_reorder.a 00:02:24.813 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.813 [193/268] Linking static target lib/librte_security.a 00:02:25.380 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.380 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.639 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.639 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.639 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.898 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.898 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.157 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.157 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.157 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.157 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.157 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.416 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.416 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.674 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.674 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.674 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.932 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.932 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.932 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.932 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.932 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:26.932 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.932 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.932 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.932 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.932 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:26.932 [221/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.191 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.191 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.191 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.191 [225/268] Linking static target 
drivers/librte_mempool_ring.a 00:02:27.191 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.450 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.827 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.441 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.441 [230/268] Linking static target lib/librte_vhost.a 00:02:32.007 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.007 [232/268] Linking target lib/librte_eal.so.24.1 00:02:32.265 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.265 [234/268] Linking target lib/librte_ring.so.24.1 00:02:32.265 [235/268] Linking target lib/librte_pci.so.24.1 00:02:32.265 [236/268] Linking target lib/librte_meter.so.24.1 00:02:32.265 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.265 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:32.265 [239/268] Linking target lib/librte_timer.so.24.1 00:02:32.265 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.266 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.266 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.266 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.266 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.266 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:32.266 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.266 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:32.524 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.524 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:32.524 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:32.524 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:32.524 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.524 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:32.782 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:32.782 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:32.782 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:32.782 [257/268] Linking target lib/librte_net.so.24.1 00:02:32.782 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.782 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:32.782 [260/268] Linking target lib/librte_security.so.24.1 00:02:32.782 [261/268] Linking target lib/librte_hash.so.24.1 00:02:33.040 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:33.041 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:33.041 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.041 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:33.041 [266/268] Linking target lib/librte_power.so.24.1 00:02:33.299 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.299 [268/268] Linking target 
lib/librte_vhost.so.24.1 00:02:33.299 INFO: autodetecting backend as ninja 00:02:33.299 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:55.233 CC lib/ut/ut.o 00:02:55.233 CC lib/ut_mock/mock.o 00:02:55.233 CC lib/log/log.o 00:02:55.233 CC lib/log/log_flags.o 00:02:55.233 CC lib/log/log_deprecated.o 00:02:55.233 LIB libspdk_ut.a 00:02:55.233 LIB libspdk_ut_mock.a 00:02:55.233 LIB libspdk_log.a 00:02:55.233 SO libspdk_ut.so.2.0 00:02:55.233 SO libspdk_ut_mock.so.6.0 00:02:55.233 SO libspdk_log.so.7.1 00:02:55.233 SYMLINK libspdk_ut.so 00:02:55.233 SYMLINK libspdk_ut_mock.so 00:02:55.233 SYMLINK libspdk_log.so 00:02:55.233 CC lib/util/base64.o 00:02:55.233 CC lib/util/bit_array.o 00:02:55.233 CC lib/util/crc16.o 00:02:55.233 CC lib/util/cpuset.o 00:02:55.233 CC lib/util/crc32.o 00:02:55.233 CC lib/util/crc32c.o 00:02:55.233 CC lib/ioat/ioat.o 00:02:55.233 CXX lib/trace_parser/trace.o 00:02:55.233 CC lib/dma/dma.o 00:02:55.233 CC lib/util/crc32_ieee.o 00:02:55.233 CC lib/util/crc64.o 00:02:55.233 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.233 CC lib/util/dif.o 00:02:55.233 CC lib/util/fd.o 00:02:55.233 CC lib/util/fd_group.o 00:02:55.233 LIB libspdk_dma.a 00:02:55.233 SO libspdk_dma.so.5.0 00:02:55.233 CC lib/vfio_user/host/vfio_user.o 00:02:55.233 CC lib/util/file.o 00:02:55.233 CC lib/util/hexlify.o 00:02:55.233 LIB libspdk_ioat.a 00:02:55.233 SO libspdk_ioat.so.7.0 00:02:55.233 SYMLINK libspdk_dma.so 00:02:55.233 CC lib/util/iov.o 00:02:55.233 CC lib/util/math.o 00:02:55.233 SYMLINK libspdk_ioat.so 00:02:55.233 CC lib/util/net.o 00:02:55.233 CC lib/util/pipe.o 00:02:55.233 CC lib/util/strerror_tls.o 00:02:55.233 CC lib/util/string.o 00:02:55.233 LIB libspdk_vfio_user.a 00:02:55.233 CC lib/util/uuid.o 00:02:55.233 CC lib/util/xor.o 00:02:55.233 SO libspdk_vfio_user.so.5.0 00:02:55.233 CC lib/util/zipf.o 00:02:55.233 CC lib/util/md5.o 00:02:55.233 SYMLINK libspdk_vfio_user.so 00:02:55.233 LIB libspdk_util.a 00:02:55.233 SO libspdk_util.so.10.1 00:02:55.233 LIB libspdk_trace_parser.a 00:02:55.233 SO libspdk_trace_parser.so.6.0 00:02:55.233 SYMLINK libspdk_util.so 00:02:55.233 SYMLINK libspdk_trace_parser.so 00:02:55.233 CC lib/conf/conf.o 00:02:55.233 CC lib/vmd/vmd.o 00:02:55.233 CC lib/vmd/led.o 00:02:55.233 CC lib/json/json_parse.o 00:02:55.233 CC lib/json/json_write.o 00:02:55.233 CC lib/rdma_utils/rdma_utils.o 00:02:55.233 CC lib/json/json_util.o 00:02:55.233 CC lib/env_dpdk/env.o 00:02:55.233 CC lib/env_dpdk/memory.o 00:02:55.233 CC lib/idxd/idxd.o 00:02:55.233 CC lib/idxd/idxd_user.o 00:02:55.233 LIB libspdk_conf.a 00:02:55.233 CC lib/env_dpdk/pci.o 00:02:55.233 CC lib/env_dpdk/init.o 00:02:55.233 SO libspdk_conf.so.6.0 00:02:55.233 LIB libspdk_rdma_utils.a 00:02:55.233 SO libspdk_rdma_utils.so.1.0 00:02:55.233 LIB libspdk_json.a 00:02:55.233 SYMLINK libspdk_conf.so 00:02:55.233 CC lib/env_dpdk/threads.o 00:02:55.233 SO libspdk_json.so.6.0 00:02:55.233 SYMLINK libspdk_rdma_utils.so 00:02:55.233 CC lib/env_dpdk/pci_ioat.o 00:02:55.233 SYMLINK libspdk_json.so 00:02:55.233 CC lib/env_dpdk/pci_virtio.o 00:02:55.233 CC lib/env_dpdk/pci_vmd.o 00:02:55.233 CC lib/env_dpdk/pci_idxd.o 00:02:55.233 CC lib/env_dpdk/pci_event.o 00:02:55.233 CC lib/rdma_provider/common.o 00:02:55.233 CC lib/env_dpdk/sigbus_handler.o 00:02:55.233 CC lib/idxd/idxd_kernel.o 00:02:55.233 CC lib/env_dpdk/pci_dpdk.o 00:02:55.233 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.233 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.233 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:02:55.233 LIB libspdk_vmd.a 00:02:55.233 SO libspdk_vmd.so.6.0 00:02:55.233 LIB libspdk_idxd.a 00:02:55.233 SO libspdk_idxd.so.12.1 00:02:55.233 SYMLINK libspdk_vmd.so 00:02:55.233 LIB libspdk_rdma_provider.a 00:02:55.233 SYMLINK libspdk_idxd.so 00:02:55.233 SO libspdk_rdma_provider.so.7.0 00:02:55.233 SYMLINK libspdk_rdma_provider.so 00:02:55.233 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.233 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.233 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.233 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.492 LIB libspdk_jsonrpc.a 00:02:55.492 SO libspdk_jsonrpc.so.6.0 00:02:55.492 SYMLINK libspdk_jsonrpc.so 00:02:55.752 LIB libspdk_env_dpdk.a 00:02:55.752 SO libspdk_env_dpdk.so.15.1 00:02:56.011 SYMLINK libspdk_env_dpdk.so 00:02:56.011 CC lib/rpc/rpc.o 00:02:56.270 LIB libspdk_rpc.a 00:02:56.270 SO libspdk_rpc.so.6.0 00:02:56.270 SYMLINK libspdk_rpc.so 00:02:56.840 CC lib/keyring/keyring_rpc.o 00:02:56.840 CC lib/keyring/keyring.o 00:02:56.840 CC lib/trace/trace_flags.o 00:02:56.840 CC lib/trace/trace.o 00:02:56.840 CC lib/trace/trace_rpc.o 00:02:56.840 CC lib/notify/notify_rpc.o 00:02:56.840 CC lib/notify/notify.o 00:02:56.840 LIB libspdk_notify.a 00:02:57.099 LIB libspdk_keyring.a 00:02:57.099 SO libspdk_notify.so.6.0 00:02:57.099 SO libspdk_keyring.so.2.0 00:02:57.099 LIB libspdk_trace.a 00:02:57.099 SYMLINK libspdk_notify.so 00:02:57.099 SYMLINK libspdk_keyring.so 00:02:57.099 SO libspdk_trace.so.11.0 00:02:57.099 SYMLINK libspdk_trace.so 00:02:57.668 CC lib/thread/thread.o 00:02:57.668 CC lib/thread/iobuf.o 00:02:57.668 CC lib/sock/sock.o 00:02:57.668 CC lib/sock/sock_rpc.o 00:02:57.927 LIB libspdk_sock.a 00:02:58.186 SO libspdk_sock.so.10.0 00:02:58.186 SYMLINK libspdk_sock.so 00:02:58.753 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.753 CC lib/nvme/nvme_ctrlr.o 00:02:58.753 CC lib/nvme/nvme_ns_cmd.o 00:02:58.753 CC lib/nvme/nvme_fabric.o 00:02:58.753 CC lib/nvme/nvme_ns.o 00:02:58.753 CC lib/nvme/nvme_pcie_common.o 00:02:58.753 CC lib/nvme/nvme_pcie.o 00:02:58.753 CC lib/nvme/nvme_qpair.o 00:02:58.753 CC lib/nvme/nvme.o 00:02:59.321 CC lib/nvme/nvme_quirks.o 00:02:59.321 LIB libspdk_thread.a 00:02:59.321 CC lib/nvme/nvme_transport.o 00:02:59.321 SO libspdk_thread.so.11.0 00:02:59.321 CC lib/nvme/nvme_discovery.o 00:02:59.321 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.321 SYMLINK libspdk_thread.so 00:02:59.321 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.580 CC lib/accel/accel.o 00:02:59.580 CC lib/blob/blobstore.o 00:02:59.839 CC lib/blob/request.o 00:02:59.839 CC lib/init/json_config.o 00:02:59.839 CC lib/virtio/virtio.o 00:02:59.839 CC lib/virtio/virtio_vhost_user.o 00:02:59.839 CC lib/blob/zeroes.o 00:02:59.839 CC lib/blob/blob_bs_dev.o 00:03:00.096 CC lib/init/subsystem.o 00:03:00.096 CC lib/init/subsystem_rpc.o 00:03:00.096 CC lib/init/rpc.o 00:03:00.096 CC lib/accel/accel_rpc.o 00:03:00.354 CC lib/accel/accel_sw.o 00:03:00.354 CC lib/virtio/virtio_vfio_user.o 00:03:00.354 CC lib/virtio/virtio_pci.o 00:03:00.354 CC lib/nvme/nvme_tcp.o 00:03:00.354 CC lib/nvme/nvme_opal.o 00:03:00.354 LIB libspdk_init.a 00:03:00.354 SO libspdk_init.so.6.0 00:03:00.354 CC lib/nvme/nvme_io_msg.o 00:03:00.354 SYMLINK libspdk_init.so 00:03:00.613 CC lib/nvme/nvme_poll_group.o 00:03:00.613 LIB libspdk_virtio.a 00:03:00.613 CC lib/fsdev/fsdev.o 00:03:00.613 SO libspdk_virtio.so.7.0 00:03:00.613 CC lib/event/app.o 00:03:00.872 SYMLINK libspdk_virtio.so 00:03:00.872 CC lib/event/reactor.o 00:03:00.872 LIB libspdk_accel.a 00:03:00.872 CC lib/event/log_rpc.o 
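For context, the DPDK subproject configured earlier in this log (see the "User defined options" summary above) can be reproduced outside the CI harness with a meson invocation roughly like the sketch below. This is a hedged reconstruction: the actual command line is generated by SPDK's configure script and is never echoed here, so the option spellings are assumed from standard meson and DPDK option names, with the values copied verbatim from the summary.

# Sketch only: reconstructed from the "User defined options" summary above;
# the real invocation is issued by SPDK's configure and is not shown in this log.
cd /home/vagrant/spdk_repo/spdk/dpdk          # source tree checked out by the job
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
# Build with the same backend command the log reports:
ninja -C build-tmp -j 10

The -Db_sanitize=address line is what produces the ASAN-instrumented librte_* objects linked in the [1/268]..[268/268] sequence above.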
00:03:00.872 SO libspdk_accel.so.16.0 00:03:00.872 CC lib/fsdev/fsdev_io.o 00:03:00.872 SYMLINK libspdk_accel.so 00:03:00.872 CC lib/fsdev/fsdev_rpc.o 00:03:01.130 CC lib/event/app_rpc.o 00:03:01.131 CC lib/event/scheduler_static.o 00:03:01.131 CC lib/nvme/nvme_zns.o 00:03:01.131 CC lib/nvme/nvme_stubs.o 00:03:01.131 CC lib/nvme/nvme_auth.o 00:03:01.131 CC lib/nvme/nvme_cuse.o 00:03:01.131 CC lib/nvme/nvme_rdma.o 00:03:01.389 LIB libspdk_event.a 00:03:01.389 SO libspdk_event.so.14.0 00:03:01.389 LIB libspdk_fsdev.a 00:03:01.389 SO libspdk_fsdev.so.2.0 00:03:01.389 SYMLINK libspdk_event.so 00:03:01.389 SYMLINK libspdk_fsdev.so 00:03:01.389 CC lib/bdev/bdev.o 00:03:01.389 CC lib/bdev/bdev_rpc.o 00:03:01.647 CC lib/bdev/bdev_zone.o 00:03:01.648 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:01.648 CC lib/bdev/part.o 00:03:01.906 CC lib/bdev/scsi_nvme.o 00:03:02.472 LIB libspdk_fuse_dispatcher.a 00:03:02.472 SO libspdk_fuse_dispatcher.so.1.0 00:03:02.472 SYMLINK libspdk_fuse_dispatcher.so 00:03:02.731 LIB libspdk_nvme.a 00:03:02.990 SO libspdk_nvme.so.15.0 00:03:03.249 LIB libspdk_blob.a 00:03:03.249 SO libspdk_blob.so.12.0 00:03:03.249 SYMLINK libspdk_nvme.so 00:03:03.508 SYMLINK libspdk_blob.so 00:03:03.765 CC lib/blobfs/blobfs.o 00:03:03.765 CC lib/blobfs/tree.o 00:03:03.765 CC lib/lvol/lvol.o 00:03:04.699 LIB libspdk_bdev.a 00:03:04.699 SO libspdk_bdev.so.17.0 00:03:04.699 LIB libspdk_blobfs.a 00:03:04.699 SYMLINK libspdk_bdev.so 00:03:04.958 SO libspdk_blobfs.so.11.0 00:03:04.958 LIB libspdk_lvol.a 00:03:04.958 SO libspdk_lvol.so.11.0 00:03:04.958 SYMLINK libspdk_blobfs.so 00:03:04.958 SYMLINK libspdk_lvol.so 00:03:04.958 CC lib/nvmf/ctrlr_discovery.o 00:03:04.958 CC lib/nvmf/ctrlr.o 00:03:04.958 CC lib/nvmf/ctrlr_bdev.o 00:03:04.958 CC lib/nvmf/subsystem.o 00:03:04.958 CC lib/nvmf/nvmf.o 00:03:04.958 CC lib/nvmf/nvmf_rpc.o 00:03:04.958 CC lib/ublk/ublk.o 00:03:04.958 CC lib/ftl/ftl_core.o 00:03:04.958 CC lib/scsi/dev.o 00:03:04.958 CC lib/nbd/nbd.o 00:03:05.217 CC lib/scsi/lun.o 00:03:05.485 CC lib/ftl/ftl_init.o 00:03:05.485 CC lib/nbd/nbd_rpc.o 00:03:05.758 CC lib/ftl/ftl_layout.o 00:03:05.758 CC lib/ftl/ftl_debug.o 00:03:05.758 CC lib/scsi/port.o 00:03:05.758 LIB libspdk_nbd.a 00:03:05.758 SO libspdk_nbd.so.7.0 00:03:05.759 SYMLINK libspdk_nbd.so 00:03:05.759 CC lib/ftl/ftl_io.o 00:03:05.759 CC lib/scsi/scsi.o 00:03:05.759 CC lib/ublk/ublk_rpc.o 00:03:06.017 CC lib/ftl/ftl_sb.o 00:03:06.017 CC lib/ftl/ftl_l2p.o 00:03:06.017 CC lib/nvmf/transport.o 00:03:06.017 CC lib/scsi/scsi_bdev.o 00:03:06.017 CC lib/nvmf/tcp.o 00:03:06.017 LIB libspdk_ublk.a 00:03:06.017 SO libspdk_ublk.so.3.0 00:03:06.017 CC lib/nvmf/stubs.o 00:03:06.017 CC lib/scsi/scsi_pr.o 00:03:06.017 CC lib/scsi/scsi_rpc.o 00:03:06.017 CC lib/ftl/ftl_l2p_flat.o 00:03:06.017 SYMLINK libspdk_ublk.so 00:03:06.275 CC lib/ftl/ftl_nv_cache.o 00:03:06.275 CC lib/ftl/ftl_band.o 00:03:06.275 CC lib/scsi/task.o 00:03:06.533 CC lib/nvmf/mdns_server.o 00:03:06.533 CC lib/nvmf/rdma.o 00:03:06.533 CC lib/nvmf/auth.o 00:03:06.533 CC lib/ftl/ftl_band_ops.o 00:03:06.533 LIB libspdk_scsi.a 00:03:06.533 SO libspdk_scsi.so.9.0 00:03:06.792 CC lib/ftl/ftl_writer.o 00:03:06.792 CC lib/ftl/ftl_rq.o 00:03:06.792 SYMLINK libspdk_scsi.so 00:03:06.792 CC lib/ftl/ftl_reloc.o 00:03:07.050 CC lib/ftl/ftl_l2p_cache.o 00:03:07.050 CC lib/ftl/ftl_p2l.o 00:03:07.050 CC lib/ftl/ftl_p2l_log.o 00:03:07.050 CC lib/iscsi/conn.o 00:03:07.050 CC lib/vhost/vhost.o 00:03:07.308 CC lib/vhost/vhost_rpc.o 00:03:07.308 CC lib/vhost/vhost_scsi.o 00:03:07.308 CC 
lib/vhost/vhost_blk.o 00:03:07.308 CC lib/vhost/rte_vhost_user.o 00:03:07.567 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.567 CC lib/iscsi/init_grp.o 00:03:07.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.825 CC lib/iscsi/iscsi.o 00:03:07.825 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.825 CC lib/iscsi/param.o 00:03:07.825 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.083 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.083 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.083 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.083 CC lib/iscsi/portal_grp.o 00:03:08.083 CC lib/iscsi/tgt_node.o 00:03:08.342 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.342 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.342 CC lib/iscsi/iscsi_subsystem.o 00:03:08.342 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.342 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.342 CC lib/iscsi/iscsi_rpc.o 00:03:08.342 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.600 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.600 LIB libspdk_vhost.a 00:03:08.600 CC lib/ftl/utils/ftl_conf.o 00:03:08.600 SO libspdk_vhost.so.8.0 00:03:08.600 CC lib/iscsi/task.o 00:03:08.600 CC lib/ftl/utils/ftl_md.o 00:03:08.600 SYMLINK libspdk_vhost.so 00:03:08.600 CC lib/ftl/utils/ftl_mempool.o 00:03:08.600 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.600 CC lib/ftl/utils/ftl_property.o 00:03:08.859 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.859 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.859 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.859 LIB libspdk_nvmf.a 00:03:08.859 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.859 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.859 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.859 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.118 SO libspdk_nvmf.so.20.0 00:03:09.118 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.118 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.118 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:09.118 CC lib/ftl/base/ftl_base_dev.o 00:03:09.118 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.118 CC lib/ftl/ftl_trace.o 00:03:09.377 SYMLINK libspdk_nvmf.so 00:03:09.377 LIB libspdk_iscsi.a 00:03:09.377 LIB libspdk_ftl.a 00:03:09.636 SO libspdk_iscsi.so.8.0 00:03:09.636 SYMLINK libspdk_iscsi.so 00:03:09.895 SO libspdk_ftl.so.9.0 00:03:10.153 SYMLINK libspdk_ftl.so 00:03:10.413 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.672 CC module/fsdev/aio/fsdev_aio.o 00:03:10.672 CC module/accel/error/accel_error.o 00:03:10.672 CC module/accel/ioat/accel_ioat.o 00:03:10.672 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.672 CC module/sock/posix/posix.o 00:03:10.672 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.672 CC module/keyring/file/keyring.o 00:03:10.672 CC module/blob/bdev/blob_bdev.o 00:03:10.672 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.672 LIB libspdk_env_dpdk_rpc.a 00:03:10.672 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.672 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.672 CC module/keyring/file/keyring_rpc.o 00:03:10.672 LIB libspdk_scheduler_gscheduler.a 00:03:10.931 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.931 SO libspdk_scheduler_gscheduler.so.4.0 00:03:10.931 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.931 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.931 CC module/accel/error/accel_error_rpc.o 00:03:10.931 LIB libspdk_scheduler_dynamic.a 00:03:10.931 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.931 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.931 SYMLINK libspdk_scheduler_dpdk_governor.so 
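The CC/LIB/SO/SYMLINK lines in this stretch are SPDK's make output: each lib/ or module/ directory is compiled, archived into a static libspdk_*.a, linked into a versioned shared object, and then symlinked. As a hedged sketch, a local rebuild along the lines of this run could look like the following; the configure flags are inferences (--enable-asan from the b_sanitize=address setting noted above, --with-xnvme from the module/bdev/xnvme objects compiled later in the log), since the actual configure invocation is not part of this excerpt.

# Hedged sketch of a local rebuild matching this run; the exact configure
# flags used by the CI job are not echoed in this part of the log.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-asan --with-xnvme   # flags inferred, see note above
make -j10                                # mirrors the -j 10 used for ninja

make then emits the per-object CC lines and the LIB/SO/SYMLINK library steps seen here as each component finishes.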
00:03:10.931 LIB libspdk_blob_bdev.a 00:03:10.931 LIB libspdk_keyring_file.a 00:03:10.931 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.931 SO libspdk_blob_bdev.so.12.0 00:03:10.931 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.931 LIB libspdk_accel_ioat.a 00:03:10.931 SO libspdk_keyring_file.so.2.0 00:03:10.932 LIB libspdk_accel_error.a 00:03:10.932 CC module/keyring/linux/keyring.o 00:03:10.932 SO libspdk_accel_ioat.so.6.0 00:03:10.932 SO libspdk_accel_error.so.2.0 00:03:11.191 SYMLINK libspdk_blob_bdev.so 00:03:11.191 SYMLINK libspdk_keyring_file.so 00:03:11.191 SYMLINK libspdk_accel_error.so 00:03:11.191 CC module/fsdev/aio/linux_aio_mgr.o 00:03:11.191 SYMLINK libspdk_accel_ioat.so 00:03:11.191 CC module/keyring/linux/keyring_rpc.o 00:03:11.191 CC module/accel/dsa/accel_dsa.o 00:03:11.191 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.191 CC module/accel/iaa/accel_iaa.o 00:03:11.191 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.191 LIB libspdk_keyring_linux.a 00:03:11.191 SO libspdk_keyring_linux.so.1.0 00:03:11.450 LIB libspdk_fsdev_aio.a 00:03:11.450 SYMLINK libspdk_keyring_linux.so 00:03:11.450 CC module/bdev/delay/vbdev_delay.o 00:03:11.450 LIB libspdk_accel_iaa.a 00:03:11.450 SO libspdk_fsdev_aio.so.1.0 00:03:11.450 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.450 LIB libspdk_sock_posix.a 00:03:11.450 SO libspdk_accel_iaa.so.3.0 00:03:11.450 LIB libspdk_accel_dsa.a 00:03:11.450 SO libspdk_sock_posix.so.6.0 00:03:11.450 SYMLINK libspdk_fsdev_aio.so 00:03:11.450 SO libspdk_accel_dsa.so.5.0 00:03:11.450 CC module/bdev/error/vbdev_error.o 00:03:11.450 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.450 CC module/bdev/gpt/gpt.o 00:03:11.450 SYMLINK libspdk_accel_iaa.so 00:03:11.450 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.450 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.450 SYMLINK libspdk_sock_posix.so 00:03:11.450 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.450 CC module/bdev/malloc/bdev_malloc.o 00:03:11.450 SYMLINK libspdk_accel_dsa.so 00:03:11.450 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.450 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.709 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.709 LIB libspdk_blobfs_bdev.a 00:03:11.709 LIB libspdk_bdev_gpt.a 00:03:11.709 SO libspdk_blobfs_bdev.so.6.0 00:03:11.709 LIB libspdk_bdev_error.a 00:03:11.709 SO libspdk_bdev_gpt.so.6.0 00:03:11.709 SO libspdk_bdev_error.so.6.0 00:03:11.969 LIB libspdk_bdev_delay.a 00:03:11.969 CC module/bdev/null/bdev_null.o 00:03:11.969 SYMLINK libspdk_blobfs_bdev.so 00:03:11.969 SYMLINK libspdk_bdev_gpt.so 00:03:11.969 SO libspdk_bdev_delay.so.6.0 00:03:11.969 CC module/bdev/null/bdev_null_rpc.o 00:03:11.969 CC module/bdev/nvme/bdev_nvme.o 00:03:11.969 SYMLINK libspdk_bdev_error.so 00:03:11.969 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.969 CC module/bdev/nvme/nvme_rpc.o 00:03:11.969 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.969 LIB libspdk_bdev_malloc.a 00:03:11.969 SYMLINK libspdk_bdev_delay.so 00:03:11.969 SO libspdk_bdev_malloc.so.6.0 00:03:11.969 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.969 LIB libspdk_bdev_lvol.a 00:03:11.969 SYMLINK libspdk_bdev_malloc.so 00:03:11.969 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.228 CC module/bdev/raid/bdev_raid.o 00:03:12.228 SO libspdk_bdev_lvol.so.6.0 00:03:12.228 LIB libspdk_bdev_null.a 00:03:12.228 CC module/bdev/nvme/vbdev_opal.o 00:03:12.228 SO libspdk_bdev_null.so.6.0 00:03:12.228 SYMLINK libspdk_bdev_lvol.so 00:03:12.228 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.228 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.228 CC 
module/bdev/split/vbdev_split.o 00:03:12.228 CC module/bdev/raid/raid0.o 00:03:12.228 SYMLINK libspdk_bdev_null.so 00:03:12.228 LIB libspdk_bdev_passthru.a 00:03:12.228 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.228 SO libspdk_bdev_passthru.so.6.0 00:03:12.488 SYMLINK libspdk_bdev_passthru.so 00:03:12.488 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.488 CC module/bdev/raid/raid1.o 00:03:12.488 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.488 CC module/bdev/raid/concat.o 00:03:12.488 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.747 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.747 CC module/bdev/xnvme/bdev_xnvme.o 00:03:12.747 LIB libspdk_bdev_split.a 00:03:12.747 SO libspdk_bdev_split.so.6.0 00:03:12.747 CC module/bdev/aio/bdev_aio.o 00:03:12.747 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.747 CC module/bdev/ftl/bdev_ftl.o 00:03:12.747 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.747 SYMLINK libspdk_bdev_split.so 00:03:12.747 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:13.006 LIB libspdk_bdev_zone_block.a 00:03:13.006 LIB libspdk_bdev_xnvme.a 00:03:13.006 SO libspdk_bdev_zone_block.so.6.0 00:03:13.006 SO libspdk_bdev_xnvme.so.3.0 00:03:13.006 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.006 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.006 SYMLINK libspdk_bdev_zone_block.so 00:03:13.006 SYMLINK libspdk_bdev_xnvme.so 00:03:13.006 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.006 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.006 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.006 LIB libspdk_bdev_ftl.a 00:03:13.006 LIB libspdk_bdev_aio.a 00:03:13.264 SO libspdk_bdev_ftl.so.6.0 00:03:13.264 SO libspdk_bdev_aio.so.6.0 00:03:13.264 SYMLINK libspdk_bdev_ftl.so 00:03:13.264 SYMLINK libspdk_bdev_aio.so 00:03:13.264 LIB libspdk_bdev_raid.a 00:03:13.264 SO libspdk_bdev_raid.so.6.0 00:03:13.264 LIB libspdk_bdev_iscsi.a 00:03:13.522 SO libspdk_bdev_iscsi.so.6.0 00:03:13.522 SYMLINK libspdk_bdev_raid.so 00:03:13.522 SYMLINK libspdk_bdev_iscsi.so 00:03:13.522 LIB libspdk_bdev_virtio.a 00:03:13.781 SO libspdk_bdev_virtio.so.6.0 00:03:13.781 SYMLINK libspdk_bdev_virtio.so 00:03:15.158 LIB libspdk_bdev_nvme.a 00:03:15.158 SO libspdk_bdev_nvme.so.7.1 00:03:15.158 SYMLINK libspdk_bdev_nvme.so 00:03:15.726 CC module/event/subsystems/vmd/vmd.o 00:03:15.726 CC module/event/subsystems/keyring/keyring.o 00:03:15.726 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.726 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.726 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.726 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.726 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.726 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.726 CC module/event/subsystems/sock/sock.o 00:03:15.985 LIB libspdk_event_scheduler.a 00:03:15.985 LIB libspdk_event_sock.a 00:03:15.985 LIB libspdk_event_fsdev.a 00:03:15.985 LIB libspdk_event_vmd.a 00:03:15.985 LIB libspdk_event_keyring.a 00:03:15.985 LIB libspdk_event_vhost_blk.a 00:03:15.985 LIB libspdk_event_iobuf.a 00:03:15.985 SO libspdk_event_sock.so.5.0 00:03:15.985 SO libspdk_event_scheduler.so.4.0 00:03:15.985 SO libspdk_event_fsdev.so.1.0 00:03:15.985 SO libspdk_event_keyring.so.1.0 00:03:15.985 SO libspdk_event_vhost_blk.so.3.0 00:03:15.985 SO libspdk_event_vmd.so.6.0 00:03:15.985 SO libspdk_event_iobuf.so.3.0 00:03:15.985 SYMLINK libspdk_event_sock.so 00:03:15.985 SYMLINK libspdk_event_scheduler.so 00:03:15.985 SYMLINK libspdk_event_keyring.so 00:03:15.985 SYMLINK libspdk_event_fsdev.so 00:03:15.985 SYMLINK 
libspdk_event_vhost_blk.so 00:03:15.985 SYMLINK libspdk_event_vmd.so 00:03:15.985 SYMLINK libspdk_event_iobuf.so 00:03:16.553 CC module/event/subsystems/accel/accel.o 00:03:16.812 LIB libspdk_event_accel.a 00:03:16.812 SO libspdk_event_accel.so.6.0 00:03:16.812 SYMLINK libspdk_event_accel.so 00:03:17.381 CC module/event/subsystems/bdev/bdev.o 00:03:17.381 LIB libspdk_event_bdev.a 00:03:17.639 SO libspdk_event_bdev.so.6.0 00:03:17.639 SYMLINK libspdk_event_bdev.so 00:03:17.898 CC module/event/subsystems/ublk/ublk.o 00:03:17.898 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.898 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.898 CC module/event/subsystems/scsi/scsi.o 00:03:17.898 CC module/event/subsystems/nbd/nbd.o 00:03:18.157 LIB libspdk_event_nbd.a 00:03:18.157 LIB libspdk_event_ublk.a 00:03:18.157 LIB libspdk_event_scsi.a 00:03:18.157 SO libspdk_event_nbd.so.6.0 00:03:18.157 SO libspdk_event_ublk.so.3.0 00:03:18.157 SO libspdk_event_scsi.so.6.0 00:03:18.157 LIB libspdk_event_nvmf.a 00:03:18.157 SYMLINK libspdk_event_nbd.so 00:03:18.157 SYMLINK libspdk_event_scsi.so 00:03:18.157 SYMLINK libspdk_event_ublk.so 00:03:18.416 SO libspdk_event_nvmf.so.6.0 00:03:18.416 SYMLINK libspdk_event_nvmf.so 00:03:18.674 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.674 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.674 LIB libspdk_event_vhost_scsi.a 00:03:18.932 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.932 LIB libspdk_event_iscsi.a 00:03:18.932 SO libspdk_event_iscsi.so.6.0 00:03:18.932 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.932 SYMLINK libspdk_event_iscsi.so 00:03:19.190 SO libspdk.so.6.0 00:03:19.190 SYMLINK libspdk.so 00:03:19.448 CC app/trace_record/trace_record.o 00:03:19.448 TEST_HEADER include/spdk/accel.h 00:03:19.448 CXX app/trace/trace.o 00:03:19.448 TEST_HEADER include/spdk/accel_module.h 00:03:19.448 CC test/rpc_client/rpc_client_test.o 00:03:19.448 TEST_HEADER include/spdk/assert.h 00:03:19.448 TEST_HEADER include/spdk/barrier.h 00:03:19.448 TEST_HEADER include/spdk/base64.h 00:03:19.448 TEST_HEADER include/spdk/bdev.h 00:03:19.448 TEST_HEADER include/spdk/bdev_module.h 00:03:19.448 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.448 TEST_HEADER include/spdk/bit_array.h 00:03:19.448 TEST_HEADER include/spdk/bit_pool.h 00:03:19.745 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.745 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.745 TEST_HEADER include/spdk/blobfs.h 00:03:19.745 TEST_HEADER include/spdk/blob.h 00:03:19.745 TEST_HEADER include/spdk/conf.h 00:03:19.745 TEST_HEADER include/spdk/config.h 00:03:19.745 TEST_HEADER include/spdk/cpuset.h 00:03:19.745 TEST_HEADER include/spdk/crc16.h 00:03:19.745 TEST_HEADER include/spdk/crc32.h 00:03:19.745 TEST_HEADER include/spdk/crc64.h 00:03:19.745 TEST_HEADER include/spdk/dif.h 00:03:19.745 TEST_HEADER include/spdk/dma.h 00:03:19.745 TEST_HEADER include/spdk/endian.h 00:03:19.745 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.745 TEST_HEADER include/spdk/env.h 00:03:19.745 TEST_HEADER include/spdk/event.h 00:03:19.745 TEST_HEADER include/spdk/fd_group.h 00:03:19.745 TEST_HEADER include/spdk/fd.h 00:03:19.745 TEST_HEADER include/spdk/file.h 00:03:19.745 TEST_HEADER include/spdk/fsdev.h 00:03:19.745 TEST_HEADER include/spdk/fsdev_module.h 00:03:19.745 TEST_HEADER include/spdk/ftl.h 00:03:19.745 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.745 TEST_HEADER include/spdk/hexlify.h 00:03:19.745 TEST_HEADER include/spdk/histogram_data.h 00:03:19.745 TEST_HEADER include/spdk/idxd.h 00:03:19.745 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:19.745 TEST_HEADER include/spdk/init.h 00:03:19.745 TEST_HEADER include/spdk/ioat.h 00:03:19.745 CC examples/ioat/perf/perf.o 00:03:19.745 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.745 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.745 TEST_HEADER include/spdk/json.h 00:03:19.745 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.745 TEST_HEADER include/spdk/keyring.h 00:03:19.745 CC test/thread/poller_perf/poller_perf.o 00:03:19.745 TEST_HEADER include/spdk/keyring_module.h 00:03:19.745 TEST_HEADER include/spdk/likely.h 00:03:19.745 TEST_HEADER include/spdk/log.h 00:03:19.745 TEST_HEADER include/spdk/lvol.h 00:03:19.745 TEST_HEADER include/spdk/md5.h 00:03:19.745 CC examples/util/zipf/zipf.o 00:03:19.745 TEST_HEADER include/spdk/memory.h 00:03:19.745 TEST_HEADER include/spdk/mmio.h 00:03:19.745 TEST_HEADER include/spdk/nbd.h 00:03:19.745 TEST_HEADER include/spdk/net.h 00:03:19.745 TEST_HEADER include/spdk/notify.h 00:03:19.745 TEST_HEADER include/spdk/nvme.h 00:03:19.745 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.745 CC test/dma/test_dma/test_dma.o 00:03:19.745 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.745 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.745 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.745 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.745 CC test/app/bdev_svc/bdev_svc.o 00:03:19.745 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.745 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.745 TEST_HEADER include/spdk/nvmf.h 00:03:19.745 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.745 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.745 TEST_HEADER include/spdk/opal.h 00:03:19.745 TEST_HEADER include/spdk/opal_spec.h 00:03:19.745 TEST_HEADER include/spdk/pci_ids.h 00:03:19.745 TEST_HEADER include/spdk/pipe.h 00:03:19.745 TEST_HEADER include/spdk/queue.h 00:03:19.745 TEST_HEADER include/spdk/reduce.h 00:03:19.745 TEST_HEADER include/spdk/rpc.h 00:03:19.745 TEST_HEADER include/spdk/scheduler.h 00:03:19.745 TEST_HEADER include/spdk/scsi.h 00:03:19.745 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.745 TEST_HEADER include/spdk/sock.h 00:03:19.745 TEST_HEADER include/spdk/stdinc.h 00:03:19.745 TEST_HEADER include/spdk/string.h 00:03:19.745 TEST_HEADER include/spdk/thread.h 00:03:19.745 TEST_HEADER include/spdk/trace.h 00:03:19.745 TEST_HEADER include/spdk/trace_parser.h 00:03:19.745 TEST_HEADER include/spdk/tree.h 00:03:19.745 TEST_HEADER include/spdk/ublk.h 00:03:19.745 TEST_HEADER include/spdk/util.h 00:03:19.745 TEST_HEADER include/spdk/uuid.h 00:03:19.745 TEST_HEADER include/spdk/version.h 00:03:19.745 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.745 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.745 LINK rpc_client_test 00:03:19.745 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.746 TEST_HEADER include/spdk/vhost.h 00:03:19.746 TEST_HEADER include/spdk/vmd.h 00:03:19.746 TEST_HEADER include/spdk/xor.h 00:03:19.746 TEST_HEADER include/spdk/zipf.h 00:03:19.746 CXX test/cpp_headers/accel.o 00:03:19.746 LINK poller_perf 00:03:19.746 LINK spdk_trace_record 00:03:19.746 LINK zipf 00:03:20.004 LINK bdev_svc 00:03:20.004 CXX test/cpp_headers/accel_module.o 00:03:20.005 LINK ioat_perf 00:03:20.005 LINK spdk_trace 00:03:20.005 CXX test/cpp_headers/assert.o 00:03:20.005 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.005 CC test/app/histogram_perf/histogram_perf.o 00:03:20.005 CXX test/cpp_headers/barrier.o 00:03:20.005 CXX test/cpp_headers/base64.o 00:03:20.005 CC examples/ioat/verify/verify.o 00:03:20.264 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.264 CC app/nvmf_tgt/nvmf_main.o 00:03:20.264 LINK test_dma 00:03:20.264 LINK histogram_perf 00:03:20.264 LINK interrupt_tgt 00:03:20.264 LINK mem_callbacks 00:03:20.264 CXX test/cpp_headers/bdev.o 00:03:20.264 LINK verify 00:03:20.264 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.264 LINK nvmf_tgt 00:03:20.522 CC app/spdk_tgt/spdk_tgt.o 00:03:20.522 CXX test/cpp_headers/bdev_module.o 00:03:20.522 CC app/spdk_lspci/spdk_lspci.o 00:03:20.522 CC test/env/vtophys/vtophys.o 00:03:20.522 CC app/spdk_nvme_perf/perf.o 00:03:20.523 LINK iscsi_tgt 00:03:20.523 LINK spdk_lspci 00:03:20.523 CC examples/thread/thread/thread_ex.o 00:03:20.523 LINK nvme_fuzz 00:03:20.523 LINK vtophys 00:03:20.523 LINK spdk_tgt 00:03:20.782 CXX test/cpp_headers/bdev_zone.o 00:03:20.782 CC test/event/event_perf/event_perf.o 00:03:20.782 CC examples/sock/hello_world/hello_sock.o 00:03:20.782 LINK event_perf 00:03:20.782 CXX test/cpp_headers/bit_array.o 00:03:20.782 LINK thread 00:03:20.782 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.782 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.782 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.040 LINK hello_sock 00:03:21.040 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.040 CXX test/cpp_headers/bit_pool.o 00:03:21.040 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.040 CC examples/idxd/perf/perf.o 00:03:21.040 LINK env_dpdk_post_init 00:03:21.040 CC test/event/reactor/reactor.o 00:03:21.040 LINK lsvmd 00:03:21.040 CC test/event/reactor_perf/reactor_perf.o 00:03:21.299 CXX test/cpp_headers/blob_bdev.o 00:03:21.299 CC test/event/app_repeat/app_repeat.o 00:03:21.299 LINK reactor 00:03:21.299 LINK reactor_perf 00:03:21.299 CC test/env/memory/memory_ut.o 00:03:21.299 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.299 LINK app_repeat 00:03:21.299 LINK spdk_nvme_perf 00:03:21.299 LINK idxd_perf 00:03:21.299 CC examples/vmd/led/led.o 00:03:21.558 LINK vhost_fuzz 00:03:21.558 CC test/event/scheduler/scheduler.o 00:03:21.558 CXX test/cpp_headers/blobfs.o 00:03:21.558 LINK led 00:03:21.558 CC app/spdk_nvme_identify/identify.o 00:03:21.558 CC test/nvme/aer/aer.o 00:03:21.817 CXX test/cpp_headers/blob.o 00:03:21.817 LINK scheduler 00:03:21.817 CC test/accel/dif/dif.o 00:03:21.817 CC test/blobfs/mkfs/mkfs.o 00:03:21.817 CXX test/cpp_headers/conf.o 00:03:21.817 CC examples/accel/perf/accel_perf.o 00:03:21.817 CC test/lvol/esnap/esnap.o 00:03:22.076 LINK aer 00:03:22.076 LINK mkfs 00:03:22.076 CXX test/cpp_headers/config.o 00:03:22.076 CXX test/cpp_headers/cpuset.o 00:03:22.076 CC examples/blob/hello_world/hello_blob.o 00:03:22.336 CXX test/cpp_headers/crc16.o 00:03:22.336 CC test/nvme/reset/reset.o 00:03:22.336 CXX test/cpp_headers/crc32.o 00:03:22.336 CC examples/nvme/hello_world/hello_world.o 00:03:22.336 LINK hello_blob 00:03:22.336 LINK accel_perf 00:03:22.595 LINK memory_ut 00:03:22.595 LINK spdk_nvme_identify 00:03:22.595 LINK dif 00:03:22.595 LINK reset 00:03:22.595 CXX test/cpp_headers/crc64.o 00:03:22.595 LINK hello_world 00:03:22.595 LINK iscsi_fuzz 00:03:22.595 CC examples/nvme/reconnect/reconnect.o 00:03:22.595 CXX test/cpp_headers/dif.o 00:03:22.595 CC examples/blob/cli/blobcli.o 00:03:22.854 CC test/nvme/sgl/sgl.o 00:03:22.855 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.855 CC test/nvme/e2edp/nvme_dp.o 00:03:22.855 CC test/env/pci/pci_ut.o 00:03:22.855 CXX test/cpp_headers/dma.o 00:03:22.855 LINK spdk_nvme_discover 00:03:22.855 CC test/bdev/bdevio/bdevio.o 00:03:22.855 CC test/app/jsoncat/jsoncat.o 00:03:23.113 LINK 
reconnect 00:03:23.113 CXX test/cpp_headers/endian.o 00:03:23.113 LINK sgl 00:03:23.113 LINK nvme_dp 00:03:23.113 LINK jsoncat 00:03:23.113 CXX test/cpp_headers/env_dpdk.o 00:03:23.113 CC app/spdk_top/spdk_top.o 00:03:23.372 LINK pci_ut 00:03:23.372 LINK blobcli 00:03:23.372 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.372 CC examples/nvme/arbitration/arbitration.o 00:03:23.372 CC test/nvme/overhead/overhead.o 00:03:23.372 CXX test/cpp_headers/env.o 00:03:23.372 LINK bdevio 00:03:23.372 CC test/app/stub/stub.o 00:03:23.630 CXX test/cpp_headers/event.o 00:03:23.630 CXX test/cpp_headers/fd_group.o 00:03:23.630 CXX test/cpp_headers/fd.o 00:03:23.630 LINK stub 00:03:23.630 LINK arbitration 00:03:23.630 LINK overhead 00:03:23.630 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.888 CXX test/cpp_headers/file.o 00:03:23.888 CXX test/cpp_headers/fsdev.o 00:03:23.888 CC examples/nvme/hotplug/hotplug.o 00:03:23.888 LINK nvme_manage 00:03:23.888 CC test/nvme/err_injection/err_injection.o 00:03:23.888 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.888 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.888 CXX test/cpp_headers/fsdev_module.o 00:03:24.146 LINK hello_fsdev 00:03:24.146 CXX test/cpp_headers/ftl.o 00:03:24.146 LINK err_injection 00:03:24.146 CC app/vhost/vhost.o 00:03:24.146 LINK hotplug 00:03:24.147 CXX test/cpp_headers/gpt_spec.o 00:03:24.147 LINK spdk_top 00:03:24.147 LINK hello_bdev 00:03:24.405 LINK vhost 00:03:24.405 CXX test/cpp_headers/hexlify.o 00:03:24.405 CXX test/cpp_headers/histogram_data.o 00:03:24.405 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.405 CC app/spdk_dd/spdk_dd.o 00:03:24.405 CC test/nvme/startup/startup.o 00:03:24.405 CC examples/nvme/abort/abort.o 00:03:24.405 CC app/fio/nvme/fio_plugin.o 00:03:24.405 CXX test/cpp_headers/idxd.o 00:03:24.663 CC app/fio/bdev/fio_plugin.o 00:03:24.663 LINK cmb_copy 00:03:24.663 LINK startup 00:03:24.663 CC test/nvme/reserve/reserve.o 00:03:24.663 CXX test/cpp_headers/idxd_spec.o 00:03:24.663 CXX test/cpp_headers/init.o 00:03:24.663 LINK spdk_dd 00:03:24.922 LINK abort 00:03:24.922 LINK bdevperf 00:03:24.922 LINK reserve 00:03:24.922 CXX test/cpp_headers/ioat.o 00:03:24.922 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.922 CC test/nvme/simple_copy/simple_copy.o 00:03:25.181 CC test/nvme/connect_stress/connect_stress.o 00:03:25.181 LINK spdk_nvme 00:03:25.181 CXX test/cpp_headers/ioat_spec.o 00:03:25.181 LINK spdk_bdev 00:03:25.181 CXX test/cpp_headers/iscsi_spec.o 00:03:25.181 LINK pmr_persistence 00:03:25.181 CC test/nvme/boot_partition/boot_partition.o 00:03:25.181 CC test/nvme/compliance/nvme_compliance.o 00:03:25.181 LINK simple_copy 00:03:25.181 CXX test/cpp_headers/json.o 00:03:25.181 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.181 LINK connect_stress 00:03:25.439 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.439 CC test/nvme/fdp/fdp.o 00:03:25.439 LINK boot_partition 00:03:25.439 CXX test/cpp_headers/jsonrpc.o 00:03:25.439 LINK fused_ordering 00:03:25.439 CXX test/cpp_headers/keyring.o 00:03:25.439 CC test/nvme/cuse/cuse.o 00:03:25.439 LINK doorbell_aers 00:03:25.439 LINK nvme_compliance 00:03:25.699 CC examples/nvmf/nvmf/nvmf.o 00:03:25.699 CXX test/cpp_headers/keyring_module.o 00:03:25.699 CXX test/cpp_headers/likely.o 00:03:25.699 CXX test/cpp_headers/log.o 00:03:25.699 CXX test/cpp_headers/lvol.o 00:03:25.699 CXX test/cpp_headers/md5.o 00:03:25.699 LINK fdp 00:03:25.699 CXX test/cpp_headers/memory.o 00:03:25.699 CXX test/cpp_headers/mmio.o 00:03:25.699 CXX test/cpp_headers/nbd.o 
00:03:25.699 CXX test/cpp_headers/net.o 00:03:25.958 CXX test/cpp_headers/notify.o 00:03:25.958 CXX test/cpp_headers/nvme.o 00:03:25.958 CXX test/cpp_headers/nvme_intel.o 00:03:25.958 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.958 LINK nvmf 00:03:25.958 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.958 CXX test/cpp_headers/nvme_spec.o 00:03:25.958 CXX test/cpp_headers/nvme_zns.o 00:03:25.958 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.958 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.958 CXX test/cpp_headers/nvmf.o 00:03:25.958 CXX test/cpp_headers/nvmf_spec.o 00:03:26.217 CXX test/cpp_headers/nvmf_transport.o 00:03:26.217 CXX test/cpp_headers/opal.o 00:03:26.217 CXX test/cpp_headers/opal_spec.o 00:03:26.217 CXX test/cpp_headers/pci_ids.o 00:03:26.217 CXX test/cpp_headers/pipe.o 00:03:26.217 CXX test/cpp_headers/queue.o 00:03:26.217 CXX test/cpp_headers/reduce.o 00:03:26.217 CXX test/cpp_headers/rpc.o 00:03:26.217 CXX test/cpp_headers/scheduler.o 00:03:26.476 CXX test/cpp_headers/scsi.o 00:03:26.476 CXX test/cpp_headers/scsi_spec.o 00:03:26.476 CXX test/cpp_headers/sock.o 00:03:26.476 CXX test/cpp_headers/stdinc.o 00:03:26.476 CXX test/cpp_headers/string.o 00:03:26.476 CXX test/cpp_headers/thread.o 00:03:26.477 CXX test/cpp_headers/trace.o 00:03:26.477 CXX test/cpp_headers/trace_parser.o 00:03:26.477 CXX test/cpp_headers/tree.o 00:03:26.477 CXX test/cpp_headers/ublk.o 00:03:26.477 CXX test/cpp_headers/util.o 00:03:26.477 CXX test/cpp_headers/uuid.o 00:03:26.477 CXX test/cpp_headers/version.o 00:03:26.477 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.735 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.735 CXX test/cpp_headers/vhost.o 00:03:26.736 CXX test/cpp_headers/vmd.o 00:03:26.736 CXX test/cpp_headers/xor.o 00:03:26.736 CXX test/cpp_headers/zipf.o 00:03:26.736 LINK cuse 00:03:28.647 LINK esnap 00:03:28.647 00:03:28.647 real 1m25.732s 00:03:28.647 user 7m15.450s 00:03:28.647 sys 1m59.667s 00:03:28.647 13:01:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:28.647 13:01:20 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.647 ************************************ 00:03:28.647 END TEST make 00:03:28.647 ************************************ 00:03:28.647 13:01:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.647 13:01:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.647 13:01:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.647 13:01:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.647 13:01:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.647 13:01:20 -- pm/common@44 -- $ pid=5289 00:03:28.647 13:01:20 -- pm/common@50 -- $ kill -TERM 5289 00:03:28.647 13:01:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.647 13:01:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.647 13:01:20 -- pm/common@44 -- $ pid=5290 00:03:28.647 13:01:20 -- pm/common@50 -- $ kill -TERM 5290 00:03:28.647 13:01:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:28.647 13:01:20 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:28.907 13:01:20 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:28.907 13:01:20 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:28.907 13:01:20 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:28.907 13:01:20 -- 
common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:28.907 13:01:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.907 13:01:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.907 13:01:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.907 13:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.907 13:01:20 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.907 13:01:20 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.907 13:01:20 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.907 13:01:20 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.907 13:01:20 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.907 13:01:20 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.907 13:01:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.907 13:01:20 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.907 13:01:20 -- scripts/common.sh@345 -- # : 1 00:03:28.907 13:01:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.907 13:01:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:28.907 13:01:20 -- scripts/common.sh@365 -- # decimal 1 00:03:28.907 13:01:20 -- scripts/common.sh@353 -- # local d=1 00:03:28.907 13:01:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.907 13:01:20 -- scripts/common.sh@355 -- # echo 1 00:03:28.907 13:01:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.907 13:01:20 -- scripts/common.sh@366 -- # decimal 2 00:03:28.907 13:01:20 -- scripts/common.sh@353 -- # local d=2 00:03:28.907 13:01:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.907 13:01:20 -- scripts/common.sh@355 -- # echo 2 00:03:28.907 13:01:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.907 13:01:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.907 13:01:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.907 13:01:20 -- scripts/common.sh@368 -- # return 0 00:03:28.907 13:01:20 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.907 13:01:20 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.907 --rc genhtml_branch_coverage=1 00:03:28.907 --rc genhtml_function_coverage=1 00:03:28.907 --rc genhtml_legend=1 00:03:28.907 --rc geninfo_all_blocks=1 00:03:28.907 --rc geninfo_unexecuted_blocks=1 00:03:28.907 00:03:28.907 ' 00:03:28.907 13:01:20 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.907 --rc genhtml_branch_coverage=1 00:03:28.907 --rc genhtml_function_coverage=1 00:03:28.907 --rc genhtml_legend=1 00:03:28.907 --rc geninfo_all_blocks=1 00:03:28.907 --rc geninfo_unexecuted_blocks=1 00:03:28.907 00:03:28.907 ' 00:03:28.907 13:01:20 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.907 --rc genhtml_branch_coverage=1 00:03:28.907 --rc genhtml_function_coverage=1 00:03:28.907 --rc genhtml_legend=1 00:03:28.907 --rc geninfo_all_blocks=1 00:03:28.907 --rc geninfo_unexecuted_blocks=1 00:03:28.907 00:03:28.907 ' 00:03:28.907 13:01:20 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.907 --rc genhtml_branch_coverage=1 00:03:28.907 --rc genhtml_function_coverage=1 00:03:28.907 --rc genhtml_legend=1 00:03:28.907 --rc geninfo_all_blocks=1 00:03:28.907 --rc geninfo_unexecuted_blocks=1 
00:03:28.907 00:03:28.907 ' 00:03:28.907 13:01:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.907 13:01:20 -- nvmf/common.sh@7 -- # uname -s 00:03:28.907 13:01:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.907 13:01:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.907 13:01:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.907 13:01:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.907 13:01:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.907 13:01:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.907 13:01:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.907 13:01:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.907 13:01:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.907 13:01:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.907 13:01:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a66a5b23-8ddc-4859-b95b-bc5833e58729 00:03:28.907 13:01:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=a66a5b23-8ddc-4859-b95b-bc5833e58729 00:03:28.907 13:01:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.907 13:01:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.907 13:01:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.907 13:01:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.907 13:01:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.907 13:01:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.907 13:01:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.907 13:01:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.907 13:01:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.907 13:01:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.907 13:01:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.907 13:01:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.907 13:01:20 -- paths/export.sh@5 -- # export PATH 00:03:29.167 13:01:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.167 13:01:20 -- nvmf/common.sh@51 -- # : 0 00:03:29.167 13:01:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:29.167 13:01:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:29.167 13:01:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:29.167 13:01:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.167 13:01:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.167 13:01:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:29.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:29.167 13:01:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:29.167 13:01:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:29.167 13:01:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:29.167 13:01:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.167 13:01:20 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.167 13:01:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.167 13:01:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.167 13:01:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.167 13:01:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.167 13:01:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.167 13:01:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.167 13:01:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.167 13:01:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.167 13:01:20 -- spdk/autotest.sh@48 -- # udevadm_pid=56000 00:03:29.167 13:01:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.167 13:01:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.167 13:01:20 -- pm/common@17 -- # local monitor 00:03:29.167 13:01:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.167 13:01:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.167 13:01:20 -- pm/common@25 -- # sleep 1 00:03:29.167 13:01:20 -- pm/common@21 -- # date +%s 00:03:29.167 13:01:20 -- pm/common@21 -- # date +%s 00:03:29.167 13:01:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733922080 00:03:29.167 13:01:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733922080 00:03:29.167 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733922080_collect-cpu-load.pm.log 00:03:29.167 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733922080_collect-vmstat.pm.log 00:03:30.105 13:01:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.105 13:01:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.105 13:01:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.105 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:03:30.105 13:01:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.105 13:01:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:30.105 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:03:30.364 13:01:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.364 13:01:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.364 13:01:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.364 13:01:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.364 13:01:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:30.364 13:01:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.364 13:01:21 -- common/autotest_common.sh@1457 -- # uname 00:03:30.364 13:01:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:30.364 13:01:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.364 13:01:21 -- common/autotest_common.sh@1477 -- # uname 00:03:30.364 13:01:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:30.364 13:01:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.364 13:01:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.364 lcov: LCOV version 1.15 00:03:30.364 13:01:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.288 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.288 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.382 13:01:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.382 13:01:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.382 13:01:52 -- common/autotest_common.sh@10 -- # set +x 00:04:03.382 13:01:52 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.382 13:01:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.382 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.382 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.382 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:03.382 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:03.382 13:01:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.382 13:01:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:03.382 13:01:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:03.382 13:01:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:03.382 13:01:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:03.382 13:01:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:03.382 13:01:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- 
common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:03.382 13:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.382 13:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:03.382 13:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:03.382 13:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.382 13:01:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.382 13:01:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.382 13:01:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.382 13:01:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.382 13:01:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.382 13:01:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.382 No valid GPT data, bailing 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # pt= 00:04:03.382 13:01:53 -- scripts/common.sh@395 -- # return 1 00:04:03.382 13:01:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.382 1+0 records in 00:04:03.382 1+0 records out 00:04:03.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181211 s, 57.9 MB/s 00:04:03.382 13:01:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.382 13:01:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.382 13:01:53 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme1n1 00:04:03.382 13:01:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:03.382 13:01:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.382 No valid GPT data, bailing 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # pt= 00:04:03.382 13:01:53 -- scripts/common.sh@395 -- # return 1 00:04:03.382 13:01:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.382 1+0 records in 00:04:03.382 1+0 records out 00:04:03.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630376 s, 166 MB/s 00:04:03.382 13:01:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.382 13:01:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.382 13:01:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:03.382 13:01:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:03.382 13:01:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:03.382 No valid GPT data, bailing 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:03.382 13:01:53 -- scripts/common.sh@394 -- # pt= 00:04:03.382 13:01:53 -- scripts/common.sh@395 -- # return 1 00:04:03.382 13:01:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:03.382 1+0 records in 00:04:03.382 1+0 records out 00:04:03.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637474 s, 164 MB/s 00:04:03.382 13:01:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.382 13:01:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.382 13:01:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:03.382 13:01:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:03.382 13:01:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:03.382 No valid GPT data, bailing 00:04:03.382 13:01:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:03.382 13:01:54 -- scripts/common.sh@394 -- # pt= 00:04:03.382 13:01:54 -- scripts/common.sh@395 -- # return 1 00:04:03.382 13:01:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:03.383 1+0 records in 00:04:03.383 1+0 records out 00:04:03.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655491 s, 160 MB/s 00:04:03.383 13:01:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.383 13:01:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.383 13:01:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:03.383 13:01:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:03.383 13:01:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:03.383 No valid GPT data, bailing 00:04:03.383 13:01:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:03.383 13:01:54 -- scripts/common.sh@394 -- # pt= 00:04:03.383 13:01:54 -- scripts/common.sh@395 -- # return 1 00:04:03.383 13:01:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:03.383 1+0 records in 00:04:03.383 1+0 records out 00:04:03.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492839 s, 213 MB/s 00:04:03.383 13:01:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.383 13:01:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.383 13:01:54 -- spdk/autotest.sh@100 -- 
# block_in_use /dev/nvme3n1 00:04:03.383 13:01:54 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:03.383 13:01:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:03.383 No valid GPT data, bailing 00:04:03.383 13:01:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:03.383 13:01:54 -- scripts/common.sh@394 -- # pt= 00:04:03.383 13:01:54 -- scripts/common.sh@395 -- # return 1 00:04:03.383 13:01:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:03.383 1+0 records in 00:04:03.383 1+0 records out 00:04:03.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620785 s, 169 MB/s 00:04:03.383 13:01:54 -- spdk/autotest.sh@105 -- # sync 00:04:03.383 13:01:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.383 13:01:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.383 13:01:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.922 13:01:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:05.922 13:01:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:05.922 13:01:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:05.922 13:01:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:06.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.058 Hugepages 00:04:07.058 node hugesize free / total 00:04:07.058 node0 1048576kB 0 / 0 00:04:07.059 node0 2048kB 0 / 0 00:04:07.059 00:04:07.059 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.317 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:07.317 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:07.576 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:07.576 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:07.576 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:07.576 13:01:59 -- spdk/autotest.sh@117 -- # uname -s 00:04:07.576 13:01:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:07.576 13:01:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:07.576 13:01:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.118 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.118 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.118 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.377 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.377 13:02:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:10.319 13:02:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:10.319 13:02:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:10.319 13:02:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.319 13:02:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:10.319 13:02:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.319 13:02:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.319 13:02:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.319 13:02:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.319 13:02:01 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:04:10.577 13:02:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:10.577 13:02:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:10.577 13:02:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.402 Waiting for block devices as requested 00:04:11.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.660 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.660 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.933 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:16.933 13:02:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.933 13:02:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:16.933 13:02:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.933 13:02:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:16.933 13:02:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:16.933 13:02:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:16.933 13:02:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:16.933 13:02:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:16.933 13:02:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:16.933 13:02:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:16.933 13:02:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.934 13:02:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1543 -- # continue 00:04:16.934 13:02:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # 
basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.934 13:02:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1543 -- # continue 00:04:16.934 13:02:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.934 13:02:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1543 -- # continue 00:04:16.934 13:02:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:16.934 13:02:08 -- 
common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:16.934 13:02:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.934 13:02:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.934 13:02:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.934 13:02:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.934 13:02:08 -- common/autotest_common.sh@1543 -- # continue 00:04:16.934 13:02:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:16.934 13:02:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.934 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:04:17.193 13:02:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:17.193 13:02:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.193 13:02:08 -- common/autotest_common.sh@10 -- # set +x 00:04:17.193 13:02:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.700 13:02:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:18.700 13:02:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.700 13:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.700 13:02:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:18.700 13:02:10 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:18.700 13:02:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.700 13:02:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:18.700 13:02:10 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:18.700 13:02:10 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:18.700 13:02:10 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:18.700 13:02:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:18.700 13:02:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.700 13:02:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.700 13:02:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq 
-r '.config[].params.traddr')) 00:04:18.959 13:02:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.959 13:02:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.959 13:02:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:18.959 13:02:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:18.959 13:02:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.959 13:02:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.959 13:02:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.959 13:02:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.959 13:02:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.959 13:02:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.959 13:02:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:18.959 13:02:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:18.959 13:02:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.959 13:02:10 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:18.959 13:02:10 -- common/autotest_common.sh@1572 -- # return 0 00:04:18.959 13:02:10 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:18.959 13:02:10 -- common/autotest_common.sh@1580 -- # return 0 00:04:18.959 13:02:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:18.959 13:02:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:18.959 13:02:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.959 13:02:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.959 13:02:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:18.959 13:02:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.959 13:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.959 13:02:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:18.959 13:02:10 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.959 13:02:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.959 13:02:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.959 13:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.959 ************************************ 00:04:18.959 START TEST env 00:04:18.959 ************************************ 00:04:18.959 13:02:10 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.219 * Looking for test storage... 
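The opal_revert_cleanup walk above enumerates the NVMe bdfs (gen_nvme.sh | jq -r '.config[].params.traddr') and keeps only controllers whose PCI device ID reads 0x0a54 — which appears to be an Intel datacenter NVMe family that requires an OPAL revert — so the emulated 1b36:0010 controllers here never match and the function returns immediately. Roughly the same filter as standalone shell, with the bdf list hard-coded from this run:

    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 on QEMU
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    echo "opal-capable bdfs: ${bdfs[*]:-none}"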
00:04:19.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.219 13:02:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.219 13:02:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.219 13:02:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.219 13:02:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.219 13:02:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.219 13:02:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.219 13:02:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.219 13:02:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.219 13:02:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.219 13:02:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.219 13:02:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.219 13:02:10 env -- scripts/common.sh@344 -- # case "$op" in 00:04:19.219 13:02:10 env -- scripts/common.sh@345 -- # : 1 00:04:19.219 13:02:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.219 13:02:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.219 13:02:10 env -- scripts/common.sh@365 -- # decimal 1 00:04:19.219 13:02:10 env -- scripts/common.sh@353 -- # local d=1 00:04:19.219 13:02:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.219 13:02:10 env -- scripts/common.sh@355 -- # echo 1 00:04:19.219 13:02:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.219 13:02:10 env -- scripts/common.sh@366 -- # decimal 2 00:04:19.219 13:02:10 env -- scripts/common.sh@353 -- # local d=2 00:04:19.219 13:02:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.219 13:02:10 env -- scripts/common.sh@355 -- # echo 2 00:04:19.219 13:02:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.219 13:02:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.219 13:02:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.219 13:02:10 env -- scripts/common.sh@368 -- # return 0 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.219 --rc genhtml_branch_coverage=1 00:04:19.219 --rc genhtml_function_coverage=1 00:04:19.219 --rc genhtml_legend=1 00:04:19.219 --rc geninfo_all_blocks=1 00:04:19.219 --rc geninfo_unexecuted_blocks=1 00:04:19.219 00:04:19.219 ' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.219 --rc genhtml_branch_coverage=1 00:04:19.219 --rc genhtml_function_coverage=1 00:04:19.219 --rc genhtml_legend=1 00:04:19.219 --rc geninfo_all_blocks=1 00:04:19.219 --rc geninfo_unexecuted_blocks=1 00:04:19.219 00:04:19.219 ' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.219 --rc genhtml_branch_coverage=1 00:04:19.219 --rc genhtml_function_coverage=1 00:04:19.219 --rc 
genhtml_legend=1 00:04:19.219 --rc geninfo_all_blocks=1 00:04:19.219 --rc geninfo_unexecuted_blocks=1 00:04:19.219 00:04:19.219 ' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.219 --rc genhtml_branch_coverage=1 00:04:19.219 --rc genhtml_function_coverage=1 00:04:19.219 --rc genhtml_legend=1 00:04:19.219 --rc geninfo_all_blocks=1 00:04:19.219 --rc geninfo_unexecuted_blocks=1 00:04:19.219 00:04:19.219 ' 00:04:19.219 13:02:10 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.219 13:02:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.219 13:02:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.219 ************************************ 00:04:19.219 START TEST env_memory 00:04:19.219 ************************************ 00:04:19.219 13:02:10 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:19.219 00:04:19.219 00:04:19.219 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.219 http://cunit.sourceforge.net/ 00:04:19.219 00:04:19.219 00:04:19.219 Suite: memory 00:04:19.220 Test: alloc and free memory map ...[2024-12-11 13:02:10.760105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:19.479 passed 00:04:19.479 Test: mem map translation ...[2024-12-11 13:02:10.804859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:19.479 [2024-12-11 13:02:10.804912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:19.479 [2024-12-11 13:02:10.804981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:19.479 [2024-12-11 13:02:10.805005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:19.479 passed 00:04:19.479 Test: mem map registration ...[2024-12-11 13:02:10.873105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:19.479 [2024-12-11 13:02:10.873162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:19.479 passed 00:04:19.479 Test: mem map adjacent registrations ...passed 00:04:19.479 00:04:19.479 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.479 suites 1 1 n/a 0 0 00:04:19.479 tests 4 4 4 0 0 00:04:19.479 asserts 152 152 152 0 n/a 00:04:19.479 00:04:19.479 Elapsed time = 0.242 seconds 00:04:19.479 00:04:19.479 real 0m0.299s 00:04:19.479 user 0m0.255s 00:04:19.479 sys 0m0.033s 00:04:19.479 13:02:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.479 13:02:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:19.479 ************************************ 00:04:19.479 END TEST env_memory 00:04:19.479 ************************************ 00:04:19.739 13:02:11 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.739 13:02:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.739 13:02:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.739 13:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.739 ************************************ 00:04:19.739 START TEST env_vtophys 00:04:19.739 ************************************ 00:04:19.739 13:02:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.739 EAL: lib.eal log level changed from notice to debug 00:04:19.739 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 1 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 2 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 3 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 4 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 5 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 6 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 7 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 8 as core 0 on socket 0 00:04:19.739 EAL: Detected lcore 9 as core 0 on socket 0 00:04:19.739 EAL: Maximum logical cores by configuration: 128 00:04:19.739 EAL: Detected CPU lcores: 10 00:04:19.739 EAL: Detected NUMA nodes: 1 00:04:19.739 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.739 EAL: Detected shared linkage of DPDK 00:04:19.739 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.739 EAL: Selected IOVA mode 'PA' 00:04:19.739 EAL: Probing VFIO support... 00:04:19.739 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.739 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:19.739 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.739 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.739 EAL: Setting up physically contiguous memory... 
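For anyone reproducing this EAL bring-up outside the CI harness, a minimal sketch of running the same vtophys binary by hand: setup.sh and the HUGEMEM variable are standard SPDK tooling, the 1024 MB figure is an arbitrary example value, and the repo path is the one used throughout this log.

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=1024 "$SPDK_DIR/scripts/setup.sh"   # reserve hugepages before the env tests
  sudo "$SPDK_DIR/test/env/vtophys/vtophys"        # the CUnit binary producing the EAL output above
  sudo "$SPDK_DIR/scripts/setup.sh" reset          # return hugepages/devices to the kernel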
00:04:19.739 EAL: Setting maximum number of open files to 524288 00:04:19.739 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.739 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.739 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.739 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.739 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.739 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.739 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.739 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.739 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.739 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.739 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.739 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.739 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.739 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.739 EAL: Hugepages will be freed exactly as allocated. 00:04:19.739 EAL: No shared files mode enabled, IPC is disabled 00:04:19.739 EAL: No shared files mode enabled, IPC is disabled 00:04:19.739 EAL: TSC frequency is ~2490000 KHz 00:04:19.739 EAL: Main lcore 0 is ready (tid=7f6b82ccba40;cpuset=[0]) 00:04:19.739 EAL: Trying to obtain current memory policy. 00:04:19.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.739 EAL: Restoring previous memory policy: 0 00:04:19.739 EAL: request: mp_malloc_sync 00:04:19.739 EAL: No shared files mode enabled, IPC is disabled 00:04:19.739 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.739 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.739 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.739 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.739 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:19.739 00:04:19.739 00:04:19.739 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.739 http://cunit.sourceforge.net/ 00:04:19.739 00:04:19.739 00:04:19.739 Suite: components_suite 00:04:20.308 Test: vtophys_malloc_test ...passed 00:04:20.308 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
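The expanded/shrunk pairs that follow are the DPDK mem event callbacks ('spdk:(nil)', registered above) firing once per allocation in vtophys_spdk_malloc_test: each malloc grows the socket-0 heap and the matching free shrinks it by the same amount. A sketch for sanity-checking that pairing in a saved copy of this log (the filename vtophys.log is hypothetical):

  grep -Eo 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' vtophys.log \
    | sort | uniq -c   # equal expanded/shrunk counts per size mean every malloc had a paired free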
00:04:20.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.308 EAL: Restoring previous memory policy: 4 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.308 EAL: Trying to obtain current memory policy. 00:04:20.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.308 EAL: Restoring previous memory policy: 4 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.308 EAL: Trying to obtain current memory policy. 00:04:20.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.308 EAL: Restoring previous memory policy: 4 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.308 EAL: request: mp_malloc_sync 00:04:20.308 EAL: No shared files mode enabled, IPC is disabled 00:04:20.308 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.567 EAL: Trying to obtain current memory policy. 00:04:20.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.567 EAL: Restoring previous memory policy: 4 00:04:20.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.567 EAL: request: mp_malloc_sync 00:04:20.567 EAL: No shared files mode enabled, IPC is disabled 00:04:20.567 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.567 EAL: request: mp_malloc_sync 00:04:20.567 EAL: No shared files mode enabled, IPC is disabled 00:04:20.567 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.567 EAL: Trying to obtain current memory policy. 00:04:20.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.567 EAL: Restoring previous memory policy: 4 00:04:20.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.567 EAL: request: mp_malloc_sync 00:04:20.567 EAL: No shared files mode enabled, IPC is disabled 00:04:20.567 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.567 EAL: request: mp_malloc_sync 00:04:20.567 EAL: No shared files mode enabled, IPC is disabled 00:04:20.567 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.567 EAL: Trying to obtain current memory policy. 
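The step sizes here (4, 6, 10, 18, 34 MB, then 66 MB and up below) are not arbitrary: they follow 2^n + 2 MB, so the suite exercises everything from a couple of hugepages to a near-gigabyte heap. A one-liner reproducing the series:

  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done   # 4MB 6MB 10MB ... 514MB 1026MB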
00:04:20.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.567 EAL: Restoring previous memory policy: 4 00:04:20.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.567 EAL: request: mp_malloc_sync 00:04:20.567 EAL: No shared files mode enabled, IPC is disabled 00:04:20.567 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.826 EAL: request: mp_malloc_sync 00:04:20.826 EAL: No shared files mode enabled, IPC is disabled 00:04:20.826 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.826 EAL: Trying to obtain current memory policy. 00:04:20.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.826 EAL: Restoring previous memory policy: 4 00:04:20.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.826 EAL: request: mp_malloc_sync 00:04:20.826 EAL: No shared files mode enabled, IPC is disabled 00:04:20.826 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.376 EAL: request: mp_malloc_sync 00:04:21.376 EAL: No shared files mode enabled, IPC is disabled 00:04:21.376 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.376 EAL: Trying to obtain current memory policy. 00:04:21.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.636 EAL: Restoring previous memory policy: 4 00:04:21.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.636 EAL: request: mp_malloc_sync 00:04:21.636 EAL: No shared files mode enabled, IPC is disabled 00:04:21.636 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.895 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.154 EAL: request: mp_malloc_sync 00:04:22.154 EAL: No shared files mode enabled, IPC is disabled 00:04:22.154 EAL: Heap on socket 0 was shrunk by 258MB 00:04:22.413 EAL: Trying to obtain current memory policy. 00:04:22.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.672 EAL: Restoring previous memory policy: 4 00:04:22.672 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.672 EAL: request: mp_malloc_sync 00:04:22.672 EAL: No shared files mode enabled, IPC is disabled 00:04:22.672 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.869 EAL: request: mp_malloc_sync 00:04:23.869 EAL: No shared files mode enabled, IPC is disabled 00:04:23.869 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.804 EAL: Trying to obtain current memory policy. 
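Note how the timestamps stretch as the steps grow: the 258 MB and 514 MB rounds each take noticeably longer than the small ones because EAL has to fault in correspondingly more 2 MB hugepages. Hugepage consumption can be watched from a second shell with nothing SPDK-specific (a sketch):

  watch -n1 'grep -i ^huge /proc/meminfo'   # HugePages_Total/Free/Rsvd while the test runs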
00:04:24.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.063 EAL: Restoring previous memory policy: 4 00:04:25.063 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.063 EAL: request: mp_malloc_sync 00:04:25.063 EAL: No shared files mode enabled, IPC is disabled 00:04:25.063 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.968 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.228 EAL: request: mp_malloc_sync 00:04:27.228 EAL: No shared files mode enabled, IPC is disabled 00:04:27.228 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.134 passed 00:04:29.134 00:04:29.134 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.134 suites 1 1 n/a 0 0 00:04:29.134 tests 2 2 2 0 0 00:04:29.134 asserts 5705 5705 5705 0 n/a 00:04:29.134 00:04:29.134 Elapsed time = 9.074 seconds 00:04:29.134 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.134 EAL: request: mp_malloc_sync 00:04:29.134 EAL: No shared files mode enabled, IPC is disabled 00:04:29.134 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.134 EAL: No shared files mode enabled, IPC is disabled 00:04:29.134 EAL: No shared files mode enabled, IPC is disabled 00:04:29.134 EAL: No shared files mode enabled, IPC is disabled 00:04:29.134 00:04:29.134 real 0m9.420s 00:04:29.134 user 0m7.977s 00:04:29.134 sys 0m1.278s 00:04:29.134 13:02:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.134 13:02:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.134 ************************************ 00:04:29.134 END TEST env_vtophys 00:04:29.134 ************************************ 00:04:29.134 13:02:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.134 13:02:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.134 13:02:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.134 13:02:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.134 ************************************ 00:04:29.134 START TEST env_pci 00:04:29.134 ************************************ 00:04:29.134 13:02:20 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.134 00:04:29.134 00:04:29.134 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.134 http://cunit.sourceforge.net/ 00:04:29.134 00:04:29.134 00:04:29.134 Suite: pci 00:04:29.134 Test: pci_hook ...[2024-12-11 13:02:20.601074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58853 has claimed it 00:04:29.134 passed 00:04:29.134 00:04:29.134 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.134 suites 1 1 n/a 0 0 00:04:29.134 tests 1 1 1 0 0 00:04:29.134 asserts 25 25 25 0 n/a 00:04:29.134 00:04:29.134 Elapsed time = 0.008 seconds 00:04:29.134 EAL: Cannot find device (10000:00:01.0) 00:04:29.134 EAL: Failed to attach device on primary process 00:04:29.134 00:04:29.134 real 0m0.110s 00:04:29.134 user 0m0.040s 00:04:29.134 sys 0m0.069s 00:04:29.134 13:02:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.134 13:02:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.134 ************************************ 00:04:29.134 END TEST env_pci 00:04:29.134 ************************************ 00:04:29.393 13:02:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.394 13:02:20 env -- env/env.sh@15 -- # uname 00:04:29.394 13:02:20 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.394 13:02:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.394 13:02:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.394 13:02:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:29.394 13:02:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.394 13:02:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.394 ************************************ 00:04:29.394 START TEST env_dpdk_post_init 00:04:29.394 ************************************ 00:04:29.394 13:02:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.394 EAL: Detected CPU lcores: 10 00:04:29.394 EAL: Detected NUMA nodes: 1 00:04:29.394 EAL: Detected shared linkage of DPDK 00:04:29.394 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.394 EAL: Selected IOVA mode 'PA' 00:04:29.394 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.653 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:29.653 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:29.653 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:29.653 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:29.653 Starting DPDK initialization... 00:04:29.653 Starting SPDK post initialization... 00:04:29.653 SPDK NVMe probe 00:04:29.653 Attaching to 0000:00:10.0 00:04:29.653 Attaching to 0000:00:11.0 00:04:29.653 Attaching to 0000:00:12.0 00:04:29.653 Attaching to 0000:00:13.0 00:04:29.653 Attached to 0000:00:10.0 00:04:29.653 Attached to 0000:00:11.0 00:04:29.653 Attached to 0000:00:13.0 00:04:29.653 Attached to 0000:00:12.0 00:04:29.653 Cleaning up... 
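The four controllers attached above are QEMU-emulated NVMe devices (1b36:0010 in the probe lines is the QEMU NVMe vendor:device ID); the attach completion order (13.0 before 12.0) differs from the probe order because attaches finish asynchronously. They can be inspected on the VM independently of the harness; the identify path below assumes the usual SPDK build layout:

  lspci -d 1b36:0010                                         # list the emulated NVMe functions
  sudo /home/vagrant/spdk_repo/spdk/build/examples/identify  # SPDK's identify example, same attach path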
00:04:29.653 00:04:29.653 real 0m0.317s 00:04:29.653 user 0m0.102s 00:04:29.653 sys 0m0.117s 00:04:29.653 13:02:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.653 ************************************ 00:04:29.653 END TEST env_dpdk_post_init 00:04:29.653 13:02:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.653 ************************************ 00:04:29.653 13:02:21 env -- env/env.sh@26 -- # uname 00:04:29.653 13:02:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.653 13:02:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.653 13:02:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.653 13:02:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.653 13:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.653 ************************************ 00:04:29.653 START TEST env_mem_callbacks 00:04:29.653 ************************************ 00:04:29.653 13:02:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.653 EAL: Detected CPU lcores: 10 00:04:29.653 EAL: Detected NUMA nodes: 1 00:04:29.653 EAL: Detected shared linkage of DPDK 00:04:29.912 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.912 EAL: Selected IOVA mode 'PA' 00:04:29.912 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.912 00:04:29.912 00:04:29.912 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.912 http://cunit.sourceforge.net/ 00:04:29.912 00:04:29.912 00:04:29.912 Suite: memory 00:04:29.912 Test: test ... 00:04:29.912 register 0x200000200000 2097152 00:04:29.912 malloc 3145728 00:04:29.912 register 0x200000400000 4194304 00:04:29.912 buf 0x2000004fffc0 len 3145728 PASSED 00:04:29.912 malloc 64 00:04:29.912 buf 0x2000004ffec0 len 64 PASSED 00:04:29.912 malloc 4194304 00:04:29.912 register 0x200000800000 6291456 00:04:29.912 buf 0x2000009fffc0 len 4194304 PASSED 00:04:29.912 free 0x2000004fffc0 3145728 00:04:29.912 free 0x2000004ffec0 64 00:04:29.912 unregister 0x200000400000 4194304 PASSED 00:04:29.912 free 0x2000009fffc0 4194304 00:04:29.912 unregister 0x200000800000 6291456 PASSED 00:04:29.912 malloc 8388608 00:04:29.912 register 0x200000400000 10485760 00:04:29.912 buf 0x2000005fffc0 len 8388608 PASSED 00:04:29.912 free 0x2000005fffc0 8388608 00:04:29.912 unregister 0x200000400000 10485760 PASSED 00:04:29.912 passed 00:04:29.912 00:04:29.912 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.912 suites 1 1 n/a 0 0 00:04:29.912 tests 1 1 1 0 0 00:04:29.912 asserts 15 15 15 0 n/a 00:04:29.912 00:04:29.912 Elapsed time = 0.077 seconds 00:04:29.912 00:04:29.912 real 0m0.281s 00:04:29.912 user 0m0.105s 00:04:29.912 sys 0m0.074s 00:04:29.912 13:02:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.912 13:02:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:29.912 ************************************ 00:04:29.912 END TEST env_mem_callbacks 00:04:29.912 ************************************ 00:04:30.172 00:04:30.172 real 0m11.042s 00:04:30.172 user 0m8.722s 00:04:30.172 sys 0m1.940s 00:04:30.172 13:02:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.172 13:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.172 ************************************ 00:04:30.172 END TEST env 00:04:30.172 
************************************ 00:04:30.172 13:02:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.172 13:02:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.172 13:02:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.172 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.172 ************************************ 00:04:30.172 START TEST rpc 00:04:30.172 ************************************ 00:04:30.172 13:02:21 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.172 * Looking for test storage... 00:04:30.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.172 13:02:21 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.172 13:02:21 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.172 13:02:21 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.432 13:02:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.432 13:02:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.432 13:02:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.432 13:02:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.432 13:02:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.432 13:02:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.432 13:02:21 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.432 13:02:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.432 13:02:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.432 13:02:21 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.432 13:02:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.432 13:02:21 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.432 13:02:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.432 13:02:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.432 13:02:21 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.432 --rc genhtml_branch_coverage=1 00:04:30.432 --rc genhtml_function_coverage=1 00:04:30.432 --rc genhtml_legend=1 00:04:30.432 --rc geninfo_all_blocks=1 00:04:30.432 --rc geninfo_unexecuted_blocks=1 00:04:30.432 00:04:30.432 ' 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.432 --rc genhtml_branch_coverage=1 00:04:30.432 --rc genhtml_function_coverage=1 00:04:30.432 --rc genhtml_legend=1 00:04:30.432 --rc geninfo_all_blocks=1 00:04:30.432 --rc geninfo_unexecuted_blocks=1 00:04:30.432 00:04:30.432 ' 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.432 --rc genhtml_branch_coverage=1 00:04:30.432 --rc genhtml_function_coverage=1 00:04:30.432 --rc genhtml_legend=1 00:04:30.432 --rc geninfo_all_blocks=1 00:04:30.432 --rc geninfo_unexecuted_blocks=1 00:04:30.432 00:04:30.432 ' 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.432 --rc genhtml_branch_coverage=1 00:04:30.432 --rc genhtml_function_coverage=1 00:04:30.432 --rc genhtml_legend=1 00:04:30.432 --rc geninfo_all_blocks=1 00:04:30.432 --rc geninfo_unexecuted_blocks=1 00:04:30.432 00:04:30.432 ' 00:04:30.432 13:02:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58985 00:04:30.432 13:02:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:30.432 13:02:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.432 13:02:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58985 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 58985 ']' 00:04:30.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
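What the harness is doing here, reduced to essentials: start spdk_tgt with the bdev tracepoint group enabled (the '-e bdev' from rpc.sh@64 above) and poll the default UNIX socket until RPCs answer. A sketch using rpc_get_methods, a standard SPDK RPC; paths follow this log's layout:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  echo "spdk.sock is up"   # the equivalent of waitforlisten succeeding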
00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.432 13:02:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.432 [2024-12-11 13:02:21.891166] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:04:30.432 [2024-12-11 13:02:21.891499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:04:30.692 [2024-12-11 13:02:22.077661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.692 [2024-12-11 13:02:22.223219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.692 [2024-12-11 13:02:22.223292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58985' to capture a snapshot of events at runtime. 00:04:30.692 [2024-12-11 13:02:22.223306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.692 [2024-12-11 13:02:22.223321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.692 [2024-12-11 13:02:22.223332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58985 for offline analysis/debug. 00:04:30.692 [2024-12-11 13:02:22.224819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.103 13:02:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.103 13:02:23 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.103 13:02:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.103 13:02:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.103 13:02:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:32.103 13:02:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:32.103 13:02:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.103 13:02:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.103 13:02:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.103 ************************************ 00:04:32.103 START TEST rpc_integrity 00:04:32.103 ************************************ 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.103 13:02:23 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.103 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.103 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.103 { 00:04:32.103 "name": "Malloc0", 00:04:32.103 "aliases": [ 00:04:32.103 "6369f989-8922-4bdb-985d-c32905663cb9" 00:04:32.103 ], 00:04:32.103 "product_name": "Malloc disk", 00:04:32.103 "block_size": 512, 00:04:32.103 "num_blocks": 16384, 00:04:32.103 "uuid": "6369f989-8922-4bdb-985d-c32905663cb9", 00:04:32.103 "assigned_rate_limits": { 00:04:32.103 "rw_ios_per_sec": 0, 00:04:32.103 "rw_mbytes_per_sec": 0, 00:04:32.103 "r_mbytes_per_sec": 0, 00:04:32.103 "w_mbytes_per_sec": 0 00:04:32.103 }, 00:04:32.103 "claimed": false, 00:04:32.103 "zoned": false, 00:04:32.103 "supported_io_types": { 00:04:32.103 "read": true, 00:04:32.103 "write": true, 00:04:32.103 "unmap": true, 00:04:32.103 "flush": true, 00:04:32.103 "reset": true, 00:04:32.103 "nvme_admin": false, 00:04:32.103 "nvme_io": false, 00:04:32.103 "nvme_io_md": false, 00:04:32.103 "write_zeroes": true, 00:04:32.103 "zcopy": true, 00:04:32.103 "get_zone_info": false, 00:04:32.103 "zone_management": false, 00:04:32.103 "zone_append": false, 00:04:32.103 "compare": false, 00:04:32.103 "compare_and_write": false, 00:04:32.103 "abort": true, 00:04:32.103 "seek_hole": false, 00:04:32.103 "seek_data": false, 00:04:32.103 "copy": true, 00:04:32.103 "nvme_iov_md": false 00:04:32.103 }, 00:04:32.103 "memory_domains": [ 00:04:32.103 { 00:04:32.103 "dma_device_id": "system", 00:04:32.103 "dma_device_type": 1 00:04:32.103 }, 00:04:32.103 { 00:04:32.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.104 "dma_device_type": 2 00:04:32.104 } 00:04:32.104 ], 00:04:32.104 "driver_specific": {} 00:04:32.104 } 00:04:32.104 ]' 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 [2024-12-11 13:02:23.487669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:32.104 [2024-12-11 13:02:23.487858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.104 [2024-12-11 13:02:23.487907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:32.104 [2024-12-11 13:02:23.487924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.104 [2024-12-11 13:02:23.491032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.104 [2024-12-11 13:02:23.491215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.104 Passthru0 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.104 
13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.104 { 00:04:32.104 "name": "Malloc0", 00:04:32.104 "aliases": [ 00:04:32.104 "6369f989-8922-4bdb-985d-c32905663cb9" 00:04:32.104 ], 00:04:32.104 "product_name": "Malloc disk", 00:04:32.104 "block_size": 512, 00:04:32.104 "num_blocks": 16384, 00:04:32.104 "uuid": "6369f989-8922-4bdb-985d-c32905663cb9", 00:04:32.104 "assigned_rate_limits": { 00:04:32.104 "rw_ios_per_sec": 0, 00:04:32.104 "rw_mbytes_per_sec": 0, 00:04:32.104 "r_mbytes_per_sec": 0, 00:04:32.104 "w_mbytes_per_sec": 0 00:04:32.104 }, 00:04:32.104 "claimed": true, 00:04:32.104 "claim_type": "exclusive_write", 00:04:32.104 "zoned": false, 00:04:32.104 "supported_io_types": { 00:04:32.104 "read": true, 00:04:32.104 "write": true, 00:04:32.104 "unmap": true, 00:04:32.104 "flush": true, 00:04:32.104 "reset": true, 00:04:32.104 "nvme_admin": false, 00:04:32.104 "nvme_io": false, 00:04:32.104 "nvme_io_md": false, 00:04:32.104 "write_zeroes": true, 00:04:32.104 "zcopy": true, 00:04:32.104 "get_zone_info": false, 00:04:32.104 "zone_management": false, 00:04:32.104 "zone_append": false, 00:04:32.104 "compare": false, 00:04:32.104 "compare_and_write": false, 00:04:32.104 "abort": true, 00:04:32.104 "seek_hole": false, 00:04:32.104 "seek_data": false, 00:04:32.104 "copy": true, 00:04:32.104 "nvme_iov_md": false 00:04:32.104 }, 00:04:32.104 "memory_domains": [ 00:04:32.104 { 00:04:32.104 "dma_device_id": "system", 00:04:32.104 "dma_device_type": 1 00:04:32.104 }, 00:04:32.104 { 00:04:32.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.104 "dma_device_type": 2 00:04:32.104 } 00:04:32.104 ], 00:04:32.104 "driver_specific": {} 00:04:32.104 }, 00:04:32.104 { 00:04:32.104 "name": "Passthru0", 00:04:32.104 "aliases": [ 00:04:32.104 "b0a2a788-54df-5686-a944-837347efb5e0" 00:04:32.104 ], 00:04:32.104 "product_name": "passthru", 00:04:32.104 "block_size": 512, 00:04:32.104 "num_blocks": 16384, 00:04:32.104 "uuid": "b0a2a788-54df-5686-a944-837347efb5e0", 00:04:32.104 "assigned_rate_limits": { 00:04:32.104 "rw_ios_per_sec": 0, 00:04:32.104 "rw_mbytes_per_sec": 0, 00:04:32.104 "r_mbytes_per_sec": 0, 00:04:32.104 "w_mbytes_per_sec": 0 00:04:32.104 }, 00:04:32.104 "claimed": false, 00:04:32.104 "zoned": false, 00:04:32.104 "supported_io_types": { 00:04:32.104 "read": true, 00:04:32.104 "write": true, 00:04:32.104 "unmap": true, 00:04:32.104 "flush": true, 00:04:32.104 "reset": true, 00:04:32.104 "nvme_admin": false, 00:04:32.104 "nvme_io": false, 00:04:32.104 "nvme_io_md": false, 00:04:32.104 "write_zeroes": true, 00:04:32.104 "zcopy": true, 00:04:32.104 "get_zone_info": false, 00:04:32.104 "zone_management": false, 00:04:32.104 "zone_append": false, 00:04:32.104 "compare": false, 00:04:32.104 "compare_and_write": false, 00:04:32.104 "abort": true, 00:04:32.104 "seek_hole": false, 00:04:32.104 "seek_data": false, 00:04:32.104 "copy": true, 00:04:32.104 "nvme_iov_md": false 00:04:32.104 }, 00:04:32.104 "memory_domains": [ 00:04:32.104 { 00:04:32.104 "dma_device_id": "system", 00:04:32.104 "dma_device_type": 1 00:04:32.104 }, 00:04:32.104 { 00:04:32.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.104 "dma_device_type": 2 
00:04:32.104 } 00:04:32.104 ], 00:04:32.104 "driver_specific": { 00:04:32.104 "passthru": { 00:04:32.104 "name": "Passthru0", 00:04:32.104 "base_bdev_name": "Malloc0" 00:04:32.104 } 00:04:32.104 } 00:04:32.104 } 00:04:32.104 ]' 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.104 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.375 ************************************ 00:04:32.375 END TEST rpc_integrity 00:04:32.375 ************************************ 00:04:32.375 13:02:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.375 00:04:32.375 real 0m0.371s 00:04:32.375 user 0m0.185s 00:04:32.375 sys 0m0.067s 00:04:32.375 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.375 13:02:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.375 13:02:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.375 13:02:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.375 13:02:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.375 13:02:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.375 ************************************ 00:04:32.375 START TEST rpc_plugins 00:04:32.375 ************************************ 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:32.375 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.375 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.375 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.375 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.375 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.375 { 00:04:32.375 "name": "Malloc1", 00:04:32.375 "aliases": 
[ 00:04:32.375 "9abd4ba4-4e9a-481e-b285-4f1db6b24b39" 00:04:32.375 ], 00:04:32.375 "product_name": "Malloc disk", 00:04:32.375 "block_size": 4096, 00:04:32.375 "num_blocks": 256, 00:04:32.375 "uuid": "9abd4ba4-4e9a-481e-b285-4f1db6b24b39", 00:04:32.375 "assigned_rate_limits": { 00:04:32.375 "rw_ios_per_sec": 0, 00:04:32.375 "rw_mbytes_per_sec": 0, 00:04:32.375 "r_mbytes_per_sec": 0, 00:04:32.375 "w_mbytes_per_sec": 0 00:04:32.375 }, 00:04:32.375 "claimed": false, 00:04:32.375 "zoned": false, 00:04:32.375 "supported_io_types": { 00:04:32.376 "read": true, 00:04:32.376 "write": true, 00:04:32.376 "unmap": true, 00:04:32.376 "flush": true, 00:04:32.376 "reset": true, 00:04:32.376 "nvme_admin": false, 00:04:32.376 "nvme_io": false, 00:04:32.376 "nvme_io_md": false, 00:04:32.376 "write_zeroes": true, 00:04:32.376 "zcopy": true, 00:04:32.376 "get_zone_info": false, 00:04:32.376 "zone_management": false, 00:04:32.376 "zone_append": false, 00:04:32.376 "compare": false, 00:04:32.376 "compare_and_write": false, 00:04:32.376 "abort": true, 00:04:32.376 "seek_hole": false, 00:04:32.376 "seek_data": false, 00:04:32.376 "copy": true, 00:04:32.376 "nvme_iov_md": false 00:04:32.376 }, 00:04:32.376 "memory_domains": [ 00:04:32.376 { 00:04:32.376 "dma_device_id": "system", 00:04:32.376 "dma_device_type": 1 00:04:32.376 }, 00:04:32.376 { 00:04:32.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.376 "dma_device_type": 2 00:04:32.376 } 00:04:32.376 ], 00:04:32.376 "driver_specific": {} 00:04:32.376 } 00:04:32.376 ]' 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.376 ************************************ 00:04:32.376 END TEST rpc_plugins 00:04:32.376 ************************************ 00:04:32.376 13:02:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.376 00:04:32.376 real 0m0.164s 00:04:32.376 user 0m0.092s 00:04:32.376 sys 0m0.030s 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.376 13:02:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.635 13:02:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.635 13:02:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.635 13:02:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.635 13:02:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.635 ************************************ 00:04:32.635 START TEST rpc_trace_cmd_test 00:04:32.635 ************************************ 00:04:32.635 13:02:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:32.635 13:02:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.635 13:02:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.635 13:02:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.635 13:02:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.635 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58985", 00:04:32.635 "tpoint_group_mask": "0x8", 00:04:32.635 "iscsi_conn": { 00:04:32.635 "mask": "0x2", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "scsi": { 00:04:32.635 "mask": "0x4", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "bdev": { 00:04:32.635 "mask": "0x8", 00:04:32.635 "tpoint_mask": "0xffffffffffffffff" 00:04:32.635 }, 00:04:32.635 "nvmf_rdma": { 00:04:32.635 "mask": "0x10", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "nvmf_tcp": { 00:04:32.635 "mask": "0x20", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "ftl": { 00:04:32.635 "mask": "0x40", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "blobfs": { 00:04:32.635 "mask": "0x80", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "dsa": { 00:04:32.635 "mask": "0x200", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "thread": { 00:04:32.635 "mask": "0x400", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "nvme_pcie": { 00:04:32.635 "mask": "0x800", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "iaa": { 00:04:32.635 "mask": "0x1000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "nvme_tcp": { 00:04:32.635 "mask": "0x2000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "bdev_nvme": { 00:04:32.635 "mask": "0x4000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "sock": { 00:04:32.635 "mask": "0x8000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "blob": { 00:04:32.635 "mask": "0x10000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "bdev_raid": { 00:04:32.635 "mask": "0x20000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 }, 00:04:32.635 "scheduler": { 00:04:32.635 "mask": "0x40000", 00:04:32.635 "tpoint_mask": "0x0" 00:04:32.635 } 00:04:32.635 }' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.635 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.894 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.894 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.894 ************************************ 00:04:32.894 END TEST rpc_trace_cmd_test 00:04:32.894 ************************************ 00:04:32.894 13:02:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.894 00:04:32.894 real 0m0.256s 
00:04:32.894 user 0m0.192s 00:04:32.894 sys 0m0.054s 00:04:32.894 13:02:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.894 13:02:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 13:02:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.894 13:02:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.894 13:02:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.894 13:02:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.894 13:02:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.894 13:02:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 ************************************ 00:04:32.894 START TEST rpc_daemon_integrity 00:04:32.894 ************************************ 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.894 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.894 { 00:04:32.894 "name": "Malloc2", 00:04:32.894 "aliases": [ 00:04:32.894 "69c5651b-5ae9-4a74-8e9c-60b1de0003f4" 00:04:32.894 ], 00:04:32.894 "product_name": "Malloc disk", 00:04:32.894 "block_size": 512, 00:04:32.894 "num_blocks": 16384, 00:04:32.894 "uuid": "69c5651b-5ae9-4a74-8e9c-60b1de0003f4", 00:04:32.894 "assigned_rate_limits": { 00:04:32.894 "rw_ios_per_sec": 0, 00:04:32.894 "rw_mbytes_per_sec": 0, 00:04:32.894 "r_mbytes_per_sec": 0, 00:04:32.894 "w_mbytes_per_sec": 0 00:04:32.894 }, 00:04:32.894 "claimed": false, 00:04:32.894 "zoned": false, 00:04:32.894 "supported_io_types": { 00:04:32.894 "read": true, 00:04:32.894 "write": true, 00:04:32.894 "unmap": true, 00:04:32.894 "flush": true, 00:04:32.894 "reset": true, 00:04:32.895 "nvme_admin": false, 00:04:32.895 "nvme_io": false, 00:04:32.895 "nvme_io_md": false, 00:04:32.895 "write_zeroes": true, 00:04:32.895 "zcopy": true, 00:04:32.895 "get_zone_info": false, 00:04:32.895 "zone_management": false, 00:04:32.895 "zone_append": false, 00:04:32.895 "compare": false, 00:04:32.895 
"compare_and_write": false, 00:04:32.895 "abort": true, 00:04:32.895 "seek_hole": false, 00:04:32.895 "seek_data": false, 00:04:32.895 "copy": true, 00:04:32.895 "nvme_iov_md": false 00:04:32.895 }, 00:04:32.895 "memory_domains": [ 00:04:32.895 { 00:04:32.895 "dma_device_id": "system", 00:04:32.895 "dma_device_type": 1 00:04:32.895 }, 00:04:32.895 { 00:04:32.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.895 "dma_device_type": 2 00:04:32.895 } 00:04:32.895 ], 00:04:32.895 "driver_specific": {} 00:04:32.895 } 00:04:32.895 ]' 00:04:32.895 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.154 [2024-12-11 13:02:24.484253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:33.154 [2024-12-11 13:02:24.484428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.154 [2024-12-11 13:02:24.484488] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:33.154 [2024-12-11 13:02:24.484601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.154 [2024-12-11 13:02:24.487696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.154 [2024-12-11 13:02:24.487845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.154 Passthru0 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.154 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.154 { 00:04:33.154 "name": "Malloc2", 00:04:33.154 "aliases": [ 00:04:33.154 "69c5651b-5ae9-4a74-8e9c-60b1de0003f4" 00:04:33.154 ], 00:04:33.154 "product_name": "Malloc disk", 00:04:33.154 "block_size": 512, 00:04:33.154 "num_blocks": 16384, 00:04:33.154 "uuid": "69c5651b-5ae9-4a74-8e9c-60b1de0003f4", 00:04:33.154 "assigned_rate_limits": { 00:04:33.154 "rw_ios_per_sec": 0, 00:04:33.154 "rw_mbytes_per_sec": 0, 00:04:33.154 "r_mbytes_per_sec": 0, 00:04:33.154 "w_mbytes_per_sec": 0 00:04:33.154 }, 00:04:33.154 "claimed": true, 00:04:33.154 "claim_type": "exclusive_write", 00:04:33.154 "zoned": false, 00:04:33.154 "supported_io_types": { 00:04:33.154 "read": true, 00:04:33.154 "write": true, 00:04:33.154 "unmap": true, 00:04:33.154 "flush": true, 00:04:33.154 "reset": true, 00:04:33.154 "nvme_admin": false, 00:04:33.154 "nvme_io": false, 00:04:33.154 "nvme_io_md": false, 00:04:33.155 "write_zeroes": true, 00:04:33.155 "zcopy": true, 00:04:33.155 "get_zone_info": false, 00:04:33.155 "zone_management": false, 00:04:33.155 "zone_append": false, 00:04:33.155 "compare": false, 00:04:33.155 "compare_and_write": false, 00:04:33.155 "abort": true, 00:04:33.155 "seek_hole": false, 00:04:33.155 "seek_data": false, 
00:04:33.155 "copy": true, 00:04:33.155 "nvme_iov_md": false 00:04:33.155 }, 00:04:33.155 "memory_domains": [ 00:04:33.155 { 00:04:33.155 "dma_device_id": "system", 00:04:33.155 "dma_device_type": 1 00:04:33.155 }, 00:04:33.155 { 00:04:33.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.155 "dma_device_type": 2 00:04:33.155 } 00:04:33.155 ], 00:04:33.155 "driver_specific": {} 00:04:33.155 }, 00:04:33.155 { 00:04:33.155 "name": "Passthru0", 00:04:33.155 "aliases": [ 00:04:33.155 "680beecb-6ec2-560a-8bb7-e8306da01f67" 00:04:33.155 ], 00:04:33.155 "product_name": "passthru", 00:04:33.155 "block_size": 512, 00:04:33.155 "num_blocks": 16384, 00:04:33.155 "uuid": "680beecb-6ec2-560a-8bb7-e8306da01f67", 00:04:33.155 "assigned_rate_limits": { 00:04:33.155 "rw_ios_per_sec": 0, 00:04:33.155 "rw_mbytes_per_sec": 0, 00:04:33.155 "r_mbytes_per_sec": 0, 00:04:33.155 "w_mbytes_per_sec": 0 00:04:33.155 }, 00:04:33.155 "claimed": false, 00:04:33.155 "zoned": false, 00:04:33.155 "supported_io_types": { 00:04:33.155 "read": true, 00:04:33.155 "write": true, 00:04:33.155 "unmap": true, 00:04:33.155 "flush": true, 00:04:33.155 "reset": true, 00:04:33.155 "nvme_admin": false, 00:04:33.155 "nvme_io": false, 00:04:33.155 "nvme_io_md": false, 00:04:33.155 "write_zeroes": true, 00:04:33.155 "zcopy": true, 00:04:33.155 "get_zone_info": false, 00:04:33.155 "zone_management": false, 00:04:33.155 "zone_append": false, 00:04:33.155 "compare": false, 00:04:33.155 "compare_and_write": false, 00:04:33.155 "abort": true, 00:04:33.155 "seek_hole": false, 00:04:33.155 "seek_data": false, 00:04:33.155 "copy": true, 00:04:33.155 "nvme_iov_md": false 00:04:33.155 }, 00:04:33.155 "memory_domains": [ 00:04:33.155 { 00:04:33.155 "dma_device_id": "system", 00:04:33.155 "dma_device_type": 1 00:04:33.155 }, 00:04:33.155 { 00:04:33.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.155 "dma_device_type": 2 00:04:33.155 } 00:04:33.155 ], 00:04:33.155 "driver_specific": { 00:04:33.155 "passthru": { 00:04:33.155 "name": "Passthru0", 00:04:33.155 "base_bdev_name": "Malloc2" 00:04:33.155 } 00:04:33.155 } 00:04:33.155 } 00:04:33.155 ]' 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.155 00:04:33.155 real 0m0.356s 00:04:33.155 user 0m0.184s 00:04:33.155 sys 0m0.063s 00:04:33.155 ************************************ 00:04:33.155 END TEST rpc_daemon_integrity 00:04:33.155 ************************************ 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.155 13:02:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.414 13:02:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:33.414 13:02:24 rpc -- rpc/rpc.sh@84 -- # killprocess 58985 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 58985 ']' 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@958 -- # kill -0 58985 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58985 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.414 killing process with pid 58985 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58985' 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@973 -- # kill 58985 00:04:33.414 13:02:24 rpc -- common/autotest_common.sh@978 -- # wait 58985 00:04:35.951 00:04:35.951 real 0m5.938s 00:04:35.951 user 0m6.185s 00:04:35.951 sys 0m1.259s 00:04:35.951 ************************************ 00:04:35.952 END TEST rpc 00:04:35.952 ************************************ 00:04:35.952 13:02:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.952 13:02:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.211 13:02:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.211 13:02:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.211 13:02:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.211 13:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:36.211 ************************************ 00:04:36.211 START TEST skip_rpc 00:04:36.211 ************************************ 00:04:36.211 13:02:27 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.211 * Looking for test storage... 
00:04:36.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.211 13:02:27 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.211 13:02:27 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.211 13:02:27 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.211 13:02:27 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.211 13:02:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.471 13:02:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.471 --rc genhtml_branch_coverage=1 00:04:36.471 --rc genhtml_function_coverage=1 00:04:36.471 --rc genhtml_legend=1 00:04:36.471 --rc geninfo_all_blocks=1 00:04:36.471 --rc geninfo_unexecuted_blocks=1 00:04:36.471 00:04:36.471 ' 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.471 --rc genhtml_branch_coverage=1 00:04:36.471 --rc genhtml_function_coverage=1 00:04:36.471 --rc genhtml_legend=1 00:04:36.471 --rc geninfo_all_blocks=1 00:04:36.471 --rc geninfo_unexecuted_blocks=1 00:04:36.471 00:04:36.471 ' 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.471 --rc genhtml_branch_coverage=1 00:04:36.471 --rc genhtml_function_coverage=1 00:04:36.471 --rc genhtml_legend=1 00:04:36.471 --rc geninfo_all_blocks=1 00:04:36.471 --rc geninfo_unexecuted_blocks=1 00:04:36.471 00:04:36.471 ' 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.471 --rc genhtml_branch_coverage=1 00:04:36.471 --rc genhtml_function_coverage=1 00:04:36.471 --rc genhtml_legend=1 00:04:36.471 --rc geninfo_all_blocks=1 00:04:36.471 --rc geninfo_unexecuted_blocks=1 00:04:36.471 00:04:36.471 ' 00:04:36.471 13:02:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.471 13:02:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.471 13:02:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.471 13:02:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.471 ************************************ 00:04:36.471 START TEST skip_rpc 00:04:36.471 ************************************ 00:04:36.471 13:02:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:36.472 13:02:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59225 00:04:36.472 13:02:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.472 13:02:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.472 13:02:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.472 [2024-12-11 13:02:27.925944] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
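The target above came up with --no-rpc-server, so test_skip_rpc, traced next, simply asserts that every RPC attempt fails and then reaps the process. A hedged sketch of that shape, not the harness code itself:

    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5    # nothing to poll for -- no RPC socket will ever appear

    # spdk_get_version must fail; success would mean an RPC server
    # started despite --no-rpc-server
    if ./scripts/rpc.py spdk_get_version 2>/dev/null; then
        echo 'RPC unexpectedly served' >&2
        exit 1
    fi

    kill "$spdk_pid"
    wait "$spdk_pid" || true    # exit status reflects the signal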
00:04:36.472 [2024-12-11 13:02:27.926093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:04:36.731 [2024-12-11 13:02:28.109758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.731 [2024-12-11 13:02:28.249977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59225 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59225 ']' 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59225 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59225 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.009 killing process with pid 59225 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59225' 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59225 00:04:42.009 13:02:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59225 00:04:44.545 00:04:44.545 real 0m7.757s 00:04:44.545 user 0m7.086s 00:04:44.545 sys 0m0.591s 00:04:44.545 13:02:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.545 13:02:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.545 ************************************ 00:04:44.545 END TEST skip_rpc 00:04:44.545 
************************************ 00:04:44.545 13:02:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.545 13:02:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.545 13:02:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.545 13:02:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.545 ************************************ 00:04:44.545 START TEST skip_rpc_with_json 00:04:44.545 ************************************ 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59329 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59329 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59329 ']' 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.545 13:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.545 [2024-12-11 13:02:35.770096] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
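Unlike the previous test, skip_rpc_with_json starts the target with its RPC server enabled, so the harness blocks in waitforlisten until /var/tmp/spdk.sock answers before issuing any command. The real helper lives in autotest_common.sh (@835-@868 in the trace below); this is only a plausible reduction -- the socket path and the max_retries=100 budget mirror the trace, the loop body is an assumption:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < 100; i++ )); do
            # bail out early if the target died before binding the socket
            kill -0 "$pid" 2>/dev/null || return 1
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }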
00:04:44.545 [2024-12-11 13:02:35.770315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:04:44.545 [2024-12-11 13:02:35.950940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.545 [2024-12-11 13:02:36.095380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.924 [2024-12-11 13:02:37.137974] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.924 request: 00:04:45.924 { 00:04:45.924 "trtype": "tcp", 00:04:45.924 "method": "nvmf_get_transports", 00:04:45.924 "req_id": 1 00:04:45.924 } 00:04:45.924 Got JSON-RPC error response 00:04:45.924 response: 00:04:45.924 { 00:04:45.924 "code": -19, 00:04:45.924 "message": "No such device" 00:04:45.924 } 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:45.924 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.925 [2024-12-11 13:02:37.154147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.925 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.925 { 00:04:45.925 "subsystems": [ 00:04:45.925 { 00:04:45.925 "subsystem": "fsdev", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "fsdev_set_opts", 00:04:45.925 "params": { 00:04:45.925 "fsdev_io_pool_size": 65535, 00:04:45.925 "fsdev_io_cache_size": 256 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "keyring", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "iobuf", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "iobuf_set_options", 00:04:45.925 "params": { 00:04:45.925 "small_pool_count": 8192, 00:04:45.925 "large_pool_count": 1024, 00:04:45.925 "small_bufsize": 8192, 00:04:45.925 "large_bufsize": 135168, 00:04:45.925 "enable_numa": false 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "sock", 00:04:45.925 "config": [ 00:04:45.925 { 
00:04:45.925 "method": "sock_set_default_impl", 00:04:45.925 "params": { 00:04:45.925 "impl_name": "posix" 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "sock_impl_set_options", 00:04:45.925 "params": { 00:04:45.925 "impl_name": "ssl", 00:04:45.925 "recv_buf_size": 4096, 00:04:45.925 "send_buf_size": 4096, 00:04:45.925 "enable_recv_pipe": true, 00:04:45.925 "enable_quickack": false, 00:04:45.925 "enable_placement_id": 0, 00:04:45.925 "enable_zerocopy_send_server": true, 00:04:45.925 "enable_zerocopy_send_client": false, 00:04:45.925 "zerocopy_threshold": 0, 00:04:45.925 "tls_version": 0, 00:04:45.925 "enable_ktls": false 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "sock_impl_set_options", 00:04:45.925 "params": { 00:04:45.925 "impl_name": "posix", 00:04:45.925 "recv_buf_size": 2097152, 00:04:45.925 "send_buf_size": 2097152, 00:04:45.925 "enable_recv_pipe": true, 00:04:45.925 "enable_quickack": false, 00:04:45.925 "enable_placement_id": 0, 00:04:45.925 "enable_zerocopy_send_server": true, 00:04:45.925 "enable_zerocopy_send_client": false, 00:04:45.925 "zerocopy_threshold": 0, 00:04:45.925 "tls_version": 0, 00:04:45.925 "enable_ktls": false 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "vmd", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "accel", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "accel_set_options", 00:04:45.925 "params": { 00:04:45.925 "small_cache_size": 128, 00:04:45.925 "large_cache_size": 16, 00:04:45.925 "task_count": 2048, 00:04:45.925 "sequence_count": 2048, 00:04:45.925 "buf_count": 2048 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "bdev", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "bdev_set_options", 00:04:45.925 "params": { 00:04:45.925 "bdev_io_pool_size": 65535, 00:04:45.925 "bdev_io_cache_size": 256, 00:04:45.925 "bdev_auto_examine": true, 00:04:45.925 "iobuf_small_cache_size": 128, 00:04:45.925 "iobuf_large_cache_size": 16 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "bdev_raid_set_options", 00:04:45.925 "params": { 00:04:45.925 "process_window_size_kb": 1024, 00:04:45.925 "process_max_bandwidth_mb_sec": 0 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "bdev_iscsi_set_options", 00:04:45.925 "params": { 00:04:45.925 "timeout_sec": 30 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "bdev_nvme_set_options", 00:04:45.925 "params": { 00:04:45.925 "action_on_timeout": "none", 00:04:45.925 "timeout_us": 0, 00:04:45.925 "timeout_admin_us": 0, 00:04:45.925 "keep_alive_timeout_ms": 10000, 00:04:45.925 "arbitration_burst": 0, 00:04:45.925 "low_priority_weight": 0, 00:04:45.925 "medium_priority_weight": 0, 00:04:45.925 "high_priority_weight": 0, 00:04:45.925 "nvme_adminq_poll_period_us": 10000, 00:04:45.925 "nvme_ioq_poll_period_us": 0, 00:04:45.925 "io_queue_requests": 0, 00:04:45.925 "delay_cmd_submit": true, 00:04:45.925 "transport_retry_count": 4, 00:04:45.925 "bdev_retry_count": 3, 00:04:45.925 "transport_ack_timeout": 0, 00:04:45.925 "ctrlr_loss_timeout_sec": 0, 00:04:45.925 "reconnect_delay_sec": 0, 00:04:45.925 "fast_io_fail_timeout_sec": 0, 00:04:45.925 "disable_auto_failback": false, 00:04:45.925 "generate_uuids": false, 00:04:45.925 "transport_tos": 0, 00:04:45.925 "nvme_error_stat": false, 00:04:45.925 "rdma_srq_size": 0, 00:04:45.925 "io_path_stat": false, 
00:04:45.925 "allow_accel_sequence": false, 00:04:45.925 "rdma_max_cq_size": 0, 00:04:45.925 "rdma_cm_event_timeout_ms": 0, 00:04:45.925 "dhchap_digests": [ 00:04:45.925 "sha256", 00:04:45.925 "sha384", 00:04:45.925 "sha512" 00:04:45.925 ], 00:04:45.925 "dhchap_dhgroups": [ 00:04:45.925 "null", 00:04:45.925 "ffdhe2048", 00:04:45.925 "ffdhe3072", 00:04:45.925 "ffdhe4096", 00:04:45.925 "ffdhe6144", 00:04:45.925 "ffdhe8192" 00:04:45.925 ], 00:04:45.925 "rdma_umr_per_io": false 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "bdev_nvme_set_hotplug", 00:04:45.925 "params": { 00:04:45.925 "period_us": 100000, 00:04:45.925 "enable": false 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "bdev_wait_for_examine" 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "scsi", 00:04:45.925 "config": null 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "scheduler", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "framework_set_scheduler", 00:04:45.925 "params": { 00:04:45.925 "name": "static" 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "vhost_scsi", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "vhost_blk", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "ublk", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "nbd", 00:04:45.925 "config": [] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "nvmf", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "nvmf_set_config", 00:04:45.925 "params": { 00:04:45.925 "discovery_filter": "match_any", 00:04:45.925 "admin_cmd_passthru": { 00:04:45.925 "identify_ctrlr": false 00:04:45.925 }, 00:04:45.925 "dhchap_digests": [ 00:04:45.925 "sha256", 00:04:45.925 "sha384", 00:04:45.925 "sha512" 00:04:45.925 ], 00:04:45.925 "dhchap_dhgroups": [ 00:04:45.925 "null", 00:04:45.925 "ffdhe2048", 00:04:45.925 "ffdhe3072", 00:04:45.925 "ffdhe4096", 00:04:45.925 "ffdhe6144", 00:04:45.925 "ffdhe8192" 00:04:45.925 ] 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "nvmf_set_max_subsystems", 00:04:45.925 "params": { 00:04:45.925 "max_subsystems": 1024 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "nvmf_set_crdt", 00:04:45.925 "params": { 00:04:45.925 "crdt1": 0, 00:04:45.925 "crdt2": 0, 00:04:45.925 "crdt3": 0 00:04:45.925 } 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "method": "nvmf_create_transport", 00:04:45.925 "params": { 00:04:45.925 "trtype": "TCP", 00:04:45.925 "max_queue_depth": 128, 00:04:45.925 "max_io_qpairs_per_ctrlr": 127, 00:04:45.925 "in_capsule_data_size": 4096, 00:04:45.925 "max_io_size": 131072, 00:04:45.925 "io_unit_size": 131072, 00:04:45.925 "max_aq_depth": 128, 00:04:45.925 "num_shared_buffers": 511, 00:04:45.925 "buf_cache_size": 4294967295, 00:04:45.925 "dif_insert_or_strip": false, 00:04:45.925 "zcopy": false, 00:04:45.925 "c2h_success": true, 00:04:45.925 "sock_priority": 0, 00:04:45.925 "abort_timeout_sec": 1, 00:04:45.925 "ack_timeout": 0, 00:04:45.925 "data_wr_pool_size": 0 00:04:45.925 } 00:04:45.925 } 00:04:45.925 ] 00:04:45.925 }, 00:04:45.925 { 00:04:45.925 "subsystem": "iscsi", 00:04:45.925 "config": [ 00:04:45.925 { 00:04:45.925 "method": "iscsi_set_options", 00:04:45.925 "params": { 00:04:45.925 "node_base": "iqn.2016-06.io.spdk", 00:04:45.925 "max_sessions": 128, 00:04:45.925 "max_connections_per_session": 2, 00:04:45.925 
"max_queue_depth": 64, 00:04:45.925 "default_time2wait": 2, 00:04:45.925 "default_time2retain": 20, 00:04:45.925 "first_burst_length": 8192, 00:04:45.926 "immediate_data": true, 00:04:45.926 "allow_duplicated_isid": false, 00:04:45.926 "error_recovery_level": 0, 00:04:45.926 "nop_timeout": 60, 00:04:45.926 "nop_in_interval": 30, 00:04:45.926 "disable_chap": false, 00:04:45.926 "require_chap": false, 00:04:45.926 "mutual_chap": false, 00:04:45.926 "chap_group": 0, 00:04:45.926 "max_large_datain_per_connection": 64, 00:04:45.926 "max_r2t_per_connection": 4, 00:04:45.926 "pdu_pool_size": 36864, 00:04:45.926 "immediate_data_pool_size": 16384, 00:04:45.926 "data_out_pool_size": 2048 00:04:45.926 } 00:04:45.926 } 00:04:45.926 ] 00:04:45.926 } 00:04:45.926 ] 00:04:45.926 } 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59329 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59329 ']' 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59329 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59329 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.926 killing process with pid 59329 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59329' 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59329 00:04:45.926 13:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59329 00:04:49.217 13:02:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59385 00:04:49.217 13:02:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.217 13:02:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59385 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59385 ']' 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59385 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59385 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.494 killing process with pid 59385 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59385' 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59385 00:04:54.494 13:02:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59385 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.401 00:04:56.401 real 0m12.217s 00:04:56.401 user 0m11.250s 00:04:56.401 sys 0m1.291s 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.401 ************************************ 00:04:56.401 END TEST skip_rpc_with_json 00:04:56.401 ************************************ 00:04:56.401 13:02:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:56.401 13:02:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.401 13:02:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.401 13:02:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.401 ************************************ 00:04:56.401 START TEST skip_rpc_with_delay 00:04:56.401 ************************************ 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.401 13:02:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.661 [2024-12-11 13:02:48.064164] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
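That app.c error is the pass condition: --wait-for-rpc is meaningless without an RPC server, and skip_rpc_with_delay only checks that spdk_tgt says so and exits non-zero. The NOT wrapper doing the inversion (autotest_common.sh@652-@679 in the traces above and below) reduces to roughly this; the exception list and the exit-code case mapping visible at @665-@672 are elided:

    NOT() {
        local es=0
        "$@" || es=$?
        # exit codes >128 encode a signal; fold back to the raw code
        (( es > 128 )) && es=$(( es & 127 ))
        (( es == 0 )) && return 1    # command succeeded: the test wanted failure
        return 0
    }

    # only proceeds if spdk_tgt rejects the flag combination
    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc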
00:04:56.661 ************************************ 00:04:56.661 END TEST skip_rpc_with_delay 00:04:56.661 ************************************ 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.661 00:04:56.661 real 0m0.194s 00:04:56.661 user 0m0.096s 00:04:56.661 sys 0m0.097s 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.661 13:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:56.661 13:02:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:56.661 13:02:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:56.661 13:02:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:56.661 13:02:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.661 13:02:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.661 13:02:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.661 ************************************ 00:04:56.661 START TEST exit_on_failed_rpc_init 00:04:56.661 ************************************ 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59524 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59524 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59524 ']' 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.661 13:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.920 [2024-12-11 13:02:48.333775] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
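exit_on_failed_rpc_init arranges the collision on purpose: the first target above owns the default /var/tmp/spdk.sock, and the test then launches a second instance on core mask 0x2 against the same socket, expecting the rpc.c failure shown just below. In outline (a sketch; pids and helpers are illustrative):

    spdk_tgt -m 0x1 &              # first instance binds /var/tmp/spdk.sock
    first_pid=$!
    waitforlisten "$first_pid"

    # same default socket: rpc.c refuses the bind, spdk_app_stop exits
    # non-zero, and NOT turns that expected failure into a pass
    NOT spdk_tgt -m 0x2

    kill "$first_pid"
    wait "$first_pid" || true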
00:04:56.920 [2024-12-11 13:02:48.334145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:04:57.179 [2024-12-11 13:02:48.517849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.179 [2024-12-11 13:02:48.659492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.559 13:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.559 [2024-12-11 13:02:49.834342] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:04:58.560 [2024-12-11 13:02:49.834482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:04:58.560 [2024-12-11 13:02:50.025247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.818 [2024-12-11 13:02:50.172458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.819 [2024-12-11 13:02:50.172583] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
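The failure is specific to both instances defaulting to the same socket path; outside this negative test, two targets coexist by giving each its own RPC endpoint. For reference only (standard spdk_tgt/rpc.py options, not part of the test flow):

    # second instance on a private RPC socket...
    spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &

    # ...addressed explicitly by the client
    ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version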
00:04:58.819 [2024-12-11 13:02:50.172602] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.819 [2024-12-11 13:02:50.172625] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59524 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59524 ']' 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59524 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59524 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59524' 00:04:59.078 killing process with pid 59524 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59524 00:04:59.078 13:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59524 00:05:02.369 00:05:02.369 real 0m4.993s 00:05:02.369 user 0m5.143s 00:05:02.369 sys 0m0.847s 00:05:02.369 ************************************ 00:05:02.369 END TEST exit_on_failed_rpc_init 00:05:02.369 ************************************ 00:05:02.369 13:02:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.369 13:02:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.369 13:02:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.369 00:05:02.369 real 0m25.714s 00:05:02.369 user 0m23.800s 00:05:02.369 sys 0m3.153s 00:05:02.369 13:02:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.369 ************************************ 00:05:02.369 END TEST skip_rpc 00:05:02.369 ************************************ 00:05:02.369 13:02:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.369 13:02:53 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.369 13:02:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.369 13:02:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.369 13:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.369 
************************************ 00:05:02.369 START TEST rpc_client 00:05:02.369 ************************************ 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.369 * Looking for test storage... 00:05:02.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.369 13:02:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.369 00:05:02.369 ' 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.369 00:05:02.369 ' 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.369 00:05:02.369 ' 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.369 00:05:02.369 ' 00:05:02.369 13:02:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:02.369 OK 00:05:02.369 13:02:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.369 00:05:02.369 real 0m0.323s 00:05:02.369 user 0m0.168s 00:05:02.369 sys 0m0.166s 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.369 13:02:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.369 ************************************ 00:05:02.369 END TEST rpc_client 00:05:02.369 ************************************ 00:05:02.369 13:02:53 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.369 13:02:53 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.369 13:02:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.369 13:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.369 ************************************ 00:05:02.369 START TEST json_config 00:05:02.369 ************************************ 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.369 13:02:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.369 13:02:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.369 13:02:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.369 13:02:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.369 13:02:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.369 13:02:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.369 13:02:53 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.369 13:02:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.369 13:02:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.369 13:02:53 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.369 13:02:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.369 13:02:53 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.369 13:02:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.369 13:02:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.369 13:02:53 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.369 00:05:02.369 ' 00:05:02.369 13:02:53 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.369 --rc genhtml_branch_coverage=1 00:05:02.369 --rc genhtml_function_coverage=1 00:05:02.369 --rc genhtml_legend=1 00:05:02.369 --rc geninfo_all_blocks=1 00:05:02.369 --rc geninfo_unexecuted_blocks=1 00:05:02.370 00:05:02.370 ' 00:05:02.370 13:02:53 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.370 --rc genhtml_branch_coverage=1 00:05:02.370 --rc genhtml_function_coverage=1 00:05:02.370 --rc genhtml_legend=1 00:05:02.370 --rc geninfo_all_blocks=1 00:05:02.370 --rc geninfo_unexecuted_blocks=1 00:05:02.370 00:05:02.370 ' 00:05:02.370 13:02:53 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.370 --rc genhtml_branch_coverage=1 00:05:02.370 --rc genhtml_function_coverage=1 00:05:02.370 --rc genhtml_legend=1 00:05:02.370 --rc geninfo_all_blocks=1 00:05:02.370 --rc geninfo_unexecuted_blocks=1 00:05:02.370 00:05:02.370 ' 00:05:02.370 13:02:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.370 13:02:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.630 13:02:53 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a66a5b23-8ddc-4859-b95b-bc5833e58729 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a66a5b23-8ddc-4859-b95b-bc5833e58729 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.630 13:02:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.630 13:02:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.630 13:02:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.630 13:02:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.630 13:02:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.630 13:02:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.630 13:02:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.630 13:02:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.630 13:02:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.630 13:02:53 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.630 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.630 13:02:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.630 WARNING: No tests are enabled so not running JSON configuration tests 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:02.630 13:02:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:02.630 00:05:02.630 real 0m0.220s 00:05:02.630 user 0m0.121s 00:05:02.630 sys 0m0.110s 00:05:02.630 13:02:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.630 13:02:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.630 ************************************ 00:05:02.630 END TEST json_config 00:05:02.630 ************************************ 00:05:02.630 13:02:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.630 13:02:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.630 13:02:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.631 13:02:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.631 ************************************ 00:05:02.631 START TEST json_config_extra_key 00:05:02.631 ************************************ 00:05:02.631 13:02:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.631 13:02:54 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.631 13:02:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.631 13:02:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.891 13:02:54 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.891 --rc genhtml_branch_coverage=1 00:05:02.891 --rc genhtml_function_coverage=1 00:05:02.891 --rc genhtml_legend=1 00:05:02.891 --rc geninfo_all_blocks=1 00:05:02.891 --rc geninfo_unexecuted_blocks=1 00:05:02.891 00:05:02.891 ' 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.891 --rc genhtml_branch_coverage=1 00:05:02.891 --rc genhtml_function_coverage=1 00:05:02.891 --rc genhtml_legend=1 00:05:02.891 --rc geninfo_all_blocks=1 00:05:02.891 --rc geninfo_unexecuted_blocks=1 00:05:02.891 00:05:02.891 ' 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.891 --rc genhtml_branch_coverage=1 00:05:02.891 --rc genhtml_function_coverage=1 00:05:02.891 --rc genhtml_legend=1 00:05:02.891 --rc geninfo_all_blocks=1 00:05:02.891 --rc geninfo_unexecuted_blocks=1 00:05:02.891 00:05:02.891 ' 00:05:02.891 13:02:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.891 --rc genhtml_branch_coverage=1 00:05:02.891 --rc 
genhtml_function_coverage=1 00:05:02.891 --rc genhtml_legend=1 00:05:02.891 --rc geninfo_all_blocks=1 00:05:02.891 --rc geninfo_unexecuted_blocks=1 00:05:02.891 00:05:02.891 ' 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a66a5b23-8ddc-4859-b95b-bc5833e58729 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a66a5b23-8ddc-4859-b95b-bc5833e58729 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.891 13:02:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.891 13:02:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.891 13:02:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.891 13:02:54 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.891 13:02:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.891 13:02:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.891 13:02:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.891 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:02.892 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.892 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.892 INFO: launching applications... 00:05:02.892 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
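The lcov version gate traced above boils down to a numeric, component-wise compare. A condensed sketch of that lt/cmp_versions logic, assuming plain dotted versions (the real helper in scripts/common.sh also handles '-' and ':' separators via IFS=.-: and a decimal normalizer):

lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0  # strictly lower
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1  # equal is not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # matches the trace: lt returns 0, so the pre-2.x LCOV_OPTS flags get exported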
00:05:02.892 13:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59763 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.892 Waiting for target to run... 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59763 /var/tmp/spdk_tgt.sock 00:05:02.892 13:02:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59763 ']' 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.892 13:02:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.892 [2024-12-11 13:02:54.379147] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:02.892 [2024-12-11 13:02:54.379288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:05:03.461 [2024-12-11 13:02:54.967346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.729 [2024-12-11 13:02:55.096551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.299 13:02:55 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.299 00:05:04.299 13:02:55 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:04.299 13:02:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.299 INFO: shutting down applications... 00:05:04.300 13:02:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
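The lines that follow show json_config_test_shutdown_app polling the target to death: SIGINT first, then probe the pid with kill -0 every half second, for up to 30 tries (~15 s). A minimal sketch of that pattern with illustrative names (the real code lives in test/json_config/common.sh):

shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0  # process gone: clean shutdown
    sleep 0.5
  done
  return 1  # still alive after ~15s; caller escalates
}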
00:05:04.300 13:02:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59763 ]] 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59763 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:04.300 13:02:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.869 13:02:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.869 13:02:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.869 13:02:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:04.869 13:02:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.437 13:02:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.437 13:02:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.437 13:02:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:05.437 13:02:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.005 13:02:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.005 13:02:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.005 13:02:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:06.005 13:02:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.574 13:02:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.574 13:02:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.574 13:02:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:06.574 13:02:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.832 13:02:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.832 13:02:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.832 13:02:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:06.832 13:02:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.400 13:02:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.400 13:02:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.400 13:02:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:07.400 13:02:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59763 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.968 13:02:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.968 SPDK target shutdown 
done 00:05:07.969 13:02:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.969 Success 00:05:07.969 13:02:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.969 00:05:07.969 real 0m5.365s 00:05:07.969 user 0m4.391s 00:05:07.969 sys 0m0.864s 00:05:07.969 13:02:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.969 13:02:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.969 ************************************ 00:05:07.969 END TEST json_config_extra_key 00:05:07.969 ************************************ 00:05:07.969 13:02:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.969 13:02:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.969 13:02:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.969 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:07.969 ************************************ 00:05:07.969 START TEST alias_rpc 00:05:07.969 ************************************ 00:05:07.969 13:02:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.228 * Looking for test storage... 00:05:08.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:08.228 13:02:59 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.228 13:02:59 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.228 13:02:59 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.228 13:02:59 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.228 13:02:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.229 13:02:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.229 --rc genhtml_branch_coverage=1 00:05:08.229 --rc genhtml_function_coverage=1 00:05:08.229 --rc genhtml_legend=1 00:05:08.229 --rc geninfo_all_blocks=1 00:05:08.229 --rc geninfo_unexecuted_blocks=1 00:05:08.229 00:05:08.229 ' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.229 --rc genhtml_branch_coverage=1 00:05:08.229 --rc genhtml_function_coverage=1 00:05:08.229 --rc genhtml_legend=1 00:05:08.229 --rc geninfo_all_blocks=1 00:05:08.229 --rc geninfo_unexecuted_blocks=1 00:05:08.229 00:05:08.229 ' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.229 --rc genhtml_branch_coverage=1 00:05:08.229 --rc genhtml_function_coverage=1 00:05:08.229 --rc genhtml_legend=1 00:05:08.229 --rc geninfo_all_blocks=1 00:05:08.229 --rc geninfo_unexecuted_blocks=1 00:05:08.229 00:05:08.229 ' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.229 --rc genhtml_branch_coverage=1 00:05:08.229 --rc genhtml_function_coverage=1 00:05:08.229 --rc genhtml_legend=1 00:05:08.229 --rc geninfo_all_blocks=1 00:05:08.229 --rc geninfo_unexecuted_blocks=1 00:05:08.229 00:05:08.229 ' 00:05:08.229 13:02:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.229 13:02:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.229 13:02:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59881 00:05:08.229 13:02:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59881 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59881 ']' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
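Both suites block on the same "Waiting for process to start up and listen on UNIX domain socket..." step. A hedged stand-in for that waitforlisten idea: the real autotest_common.sh helper polls through rpc.py with max_retries=100, while this sketch just probes the socket directly, so treat the probe as an assumption rather than the actual implementation:

waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target died while we waited
    python3 -c "import socket,sys; s=socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])" "$sock" \
      2>/dev/null && return 0               # socket accepts: target is up
    sleep 0.1
  done
  return 1
}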
00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.229 13:02:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.488 [2024-12-11 13:02:59.823421] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:08.488 [2024-12-11 13:02:59.823701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:05:08.488 [2024-12-11 13:03:00.013922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.747 [2024-12-11 13:03:00.165251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.685 13:03:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.685 13:03:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:09.685 13:03:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:09.944 13:03:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59881 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59881 ']' 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59881 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59881 00:05:09.944 killing process with pid 59881 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59881' 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 59881 00:05:09.944 13:03:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 59881 00:05:13.233 ************************************ 00:05:13.233 END TEST alias_rpc 00:05:13.233 ************************************ 00:05:13.233 00:05:13.233 real 0m4.722s 00:05:13.233 user 0m4.471s 00:05:13.233 sys 0m0.824s 00:05:13.233 13:03:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.233 13:03:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.233 13:03:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:13.233 13:03:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.233 13:03:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.233 13:03:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.233 13:03:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.233 ************************************ 00:05:13.233 START TEST spdkcli_tcp 00:05:13.233 ************************************ 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.233 * Looking for test storage... 
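Teardown in alias_rpc goes through killprocess, whose trace appears above (kill -0, ps --no-headers -o comm=, the reactor_0 = sudo guard). A simplified sketch of that pattern; the sudo check exists so the harness never signals a privilege wrapper instead of the target:

killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 0     # already gone
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
  [[ $name == sudo ]] && return 1            # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap; works because spdk_tgt is our child
}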
00:05:13.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.233 13:03:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.233 --rc genhtml_branch_coverage=1 00:05:13.233 --rc genhtml_function_coverage=1 00:05:13.233 --rc genhtml_legend=1 00:05:13.233 --rc geninfo_all_blocks=1 00:05:13.233 --rc geninfo_unexecuted_blocks=1 00:05:13.233 00:05:13.233 ' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.233 --rc genhtml_branch_coverage=1 00:05:13.233 --rc genhtml_function_coverage=1 00:05:13.233 --rc genhtml_legend=1 00:05:13.233 --rc geninfo_all_blocks=1 00:05:13.233 --rc geninfo_unexecuted_blocks=1 00:05:13.233 
00:05:13.233 ' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.233 --rc genhtml_branch_coverage=1 00:05:13.233 --rc genhtml_function_coverage=1 00:05:13.233 --rc genhtml_legend=1 00:05:13.233 --rc geninfo_all_blocks=1 00:05:13.233 --rc geninfo_unexecuted_blocks=1 00:05:13.233 00:05:13.233 ' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.233 --rc genhtml_branch_coverage=1 00:05:13.233 --rc genhtml_function_coverage=1 00:05:13.233 --rc genhtml_legend=1 00:05:13.233 --rc geninfo_all_blocks=1 00:05:13.233 --rc geninfo_unexecuted_blocks=1 00:05:13.233 00:05:13.233 ' 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59999 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.233 13:03:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59999 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59999 ']' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.233 13:03:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.233 [2024-12-11 13:03:04.624859] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
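tcp.sh exists to exercise the RPC layer over TCP rather than the usual UNIX socket. It does that by bridging a TCP port to the target's RPC socket with socat, as the trace below shows; ports, paths, and rpc.py flags here are taken verbatim from the trace, while the cleanup line is illustrative:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# any RPC now works against 127.0.0.1:9998, e.g. the method listing below:
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"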
00:05:13.233 [2024-12-11 13:03:04.625000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:05:13.493 [2024-12-11 13:03:04.809500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.493 [2024-12-11 13:03:04.947718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.493 [2024-12-11 13:03:04.947750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.431 13:03:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.431 13:03:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:14.431 13:03:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60016 00:05:14.431 13:03:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.431 13:03:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.700 [ 00:05:14.700 "bdev_malloc_delete", 00:05:14.700 "bdev_malloc_create", 00:05:14.700 "bdev_null_resize", 00:05:14.700 "bdev_null_delete", 00:05:14.700 "bdev_null_create", 00:05:14.700 "bdev_nvme_cuse_unregister", 00:05:14.700 "bdev_nvme_cuse_register", 00:05:14.700 "bdev_opal_new_user", 00:05:14.700 "bdev_opal_set_lock_state", 00:05:14.700 "bdev_opal_delete", 00:05:14.700 "bdev_opal_get_info", 00:05:14.700 "bdev_opal_create", 00:05:14.700 "bdev_nvme_opal_revert", 00:05:14.701 "bdev_nvme_opal_init", 00:05:14.701 "bdev_nvme_send_cmd", 00:05:14.701 "bdev_nvme_set_keys", 00:05:14.701 "bdev_nvme_get_path_iostat", 00:05:14.701 "bdev_nvme_get_mdns_discovery_info", 00:05:14.701 "bdev_nvme_stop_mdns_discovery", 00:05:14.701 "bdev_nvme_start_mdns_discovery", 00:05:14.701 "bdev_nvme_set_multipath_policy", 00:05:14.701 "bdev_nvme_set_preferred_path", 00:05:14.701 "bdev_nvme_get_io_paths", 00:05:14.701 "bdev_nvme_remove_error_injection", 00:05:14.701 "bdev_nvme_add_error_injection", 00:05:14.701 "bdev_nvme_get_discovery_info", 00:05:14.701 "bdev_nvme_stop_discovery", 00:05:14.701 "bdev_nvme_start_discovery", 00:05:14.701 "bdev_nvme_get_controller_health_info", 00:05:14.701 "bdev_nvme_disable_controller", 00:05:14.701 "bdev_nvme_enable_controller", 00:05:14.701 "bdev_nvme_reset_controller", 00:05:14.701 "bdev_nvme_get_transport_statistics", 00:05:14.701 "bdev_nvme_apply_firmware", 00:05:14.701 "bdev_nvme_detach_controller", 00:05:14.701 "bdev_nvme_get_controllers", 00:05:14.701 "bdev_nvme_attach_controller", 00:05:14.701 "bdev_nvme_set_hotplug", 00:05:14.701 "bdev_nvme_set_options", 00:05:14.701 "bdev_passthru_delete", 00:05:14.701 "bdev_passthru_create", 00:05:14.701 "bdev_lvol_set_parent_bdev", 00:05:14.701 "bdev_lvol_set_parent", 00:05:14.701 "bdev_lvol_check_shallow_copy", 00:05:14.701 "bdev_lvol_start_shallow_copy", 00:05:14.701 "bdev_lvol_grow_lvstore", 00:05:14.701 "bdev_lvol_get_lvols", 00:05:14.701 "bdev_lvol_get_lvstores", 00:05:14.701 "bdev_lvol_delete", 00:05:14.701 "bdev_lvol_set_read_only", 00:05:14.701 "bdev_lvol_resize", 00:05:14.701 "bdev_lvol_decouple_parent", 00:05:14.701 "bdev_lvol_inflate", 00:05:14.701 "bdev_lvol_rename", 00:05:14.701 "bdev_lvol_clone_bdev", 00:05:14.701 "bdev_lvol_clone", 00:05:14.701 "bdev_lvol_snapshot", 00:05:14.701 "bdev_lvol_create", 00:05:14.701 "bdev_lvol_delete_lvstore", 00:05:14.701 "bdev_lvol_rename_lvstore", 00:05:14.701 
"bdev_lvol_create_lvstore", 00:05:14.701 "bdev_raid_set_options", 00:05:14.701 "bdev_raid_remove_base_bdev", 00:05:14.701 "bdev_raid_add_base_bdev", 00:05:14.701 "bdev_raid_delete", 00:05:14.701 "bdev_raid_create", 00:05:14.701 "bdev_raid_get_bdevs", 00:05:14.701 "bdev_error_inject_error", 00:05:14.701 "bdev_error_delete", 00:05:14.701 "bdev_error_create", 00:05:14.701 "bdev_split_delete", 00:05:14.701 "bdev_split_create", 00:05:14.701 "bdev_delay_delete", 00:05:14.701 "bdev_delay_create", 00:05:14.701 "bdev_delay_update_latency", 00:05:14.701 "bdev_zone_block_delete", 00:05:14.701 "bdev_zone_block_create", 00:05:14.701 "blobfs_create", 00:05:14.701 "blobfs_detect", 00:05:14.701 "blobfs_set_cache_size", 00:05:14.701 "bdev_xnvme_delete", 00:05:14.701 "bdev_xnvme_create", 00:05:14.701 "bdev_aio_delete", 00:05:14.701 "bdev_aio_rescan", 00:05:14.701 "bdev_aio_create", 00:05:14.701 "bdev_ftl_set_property", 00:05:14.701 "bdev_ftl_get_properties", 00:05:14.701 "bdev_ftl_get_stats", 00:05:14.701 "bdev_ftl_unmap", 00:05:14.701 "bdev_ftl_unload", 00:05:14.701 "bdev_ftl_delete", 00:05:14.701 "bdev_ftl_load", 00:05:14.701 "bdev_ftl_create", 00:05:14.701 "bdev_virtio_attach_controller", 00:05:14.701 "bdev_virtio_scsi_get_devices", 00:05:14.701 "bdev_virtio_detach_controller", 00:05:14.701 "bdev_virtio_blk_set_hotplug", 00:05:14.701 "bdev_iscsi_delete", 00:05:14.701 "bdev_iscsi_create", 00:05:14.701 "bdev_iscsi_set_options", 00:05:14.701 "accel_error_inject_error", 00:05:14.701 "ioat_scan_accel_module", 00:05:14.701 "dsa_scan_accel_module", 00:05:14.701 "iaa_scan_accel_module", 00:05:14.701 "keyring_file_remove_key", 00:05:14.701 "keyring_file_add_key", 00:05:14.701 "keyring_linux_set_options", 00:05:14.701 "fsdev_aio_delete", 00:05:14.701 "fsdev_aio_create", 00:05:14.701 "iscsi_get_histogram", 00:05:14.701 "iscsi_enable_histogram", 00:05:14.701 "iscsi_set_options", 00:05:14.701 "iscsi_get_auth_groups", 00:05:14.701 "iscsi_auth_group_remove_secret", 00:05:14.701 "iscsi_auth_group_add_secret", 00:05:14.701 "iscsi_delete_auth_group", 00:05:14.701 "iscsi_create_auth_group", 00:05:14.701 "iscsi_set_discovery_auth", 00:05:14.701 "iscsi_get_options", 00:05:14.701 "iscsi_target_node_request_logout", 00:05:14.701 "iscsi_target_node_set_redirect", 00:05:14.701 "iscsi_target_node_set_auth", 00:05:14.701 "iscsi_target_node_add_lun", 00:05:14.701 "iscsi_get_stats", 00:05:14.701 "iscsi_get_connections", 00:05:14.701 "iscsi_portal_group_set_auth", 00:05:14.701 "iscsi_start_portal_group", 00:05:14.701 "iscsi_delete_portal_group", 00:05:14.701 "iscsi_create_portal_group", 00:05:14.701 "iscsi_get_portal_groups", 00:05:14.701 "iscsi_delete_target_node", 00:05:14.701 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.701 "iscsi_target_node_add_pg_ig_maps", 00:05:14.701 "iscsi_create_target_node", 00:05:14.701 "iscsi_get_target_nodes", 00:05:14.701 "iscsi_delete_initiator_group", 00:05:14.701 "iscsi_initiator_group_remove_initiators", 00:05:14.701 "iscsi_initiator_group_add_initiators", 00:05:14.701 "iscsi_create_initiator_group", 00:05:14.701 "iscsi_get_initiator_groups", 00:05:14.701 "nvmf_set_crdt", 00:05:14.701 "nvmf_set_config", 00:05:14.701 "nvmf_set_max_subsystems", 00:05:14.701 "nvmf_stop_mdns_prr", 00:05:14.701 "nvmf_publish_mdns_prr", 00:05:14.701 "nvmf_subsystem_get_listeners", 00:05:14.701 "nvmf_subsystem_get_qpairs", 00:05:14.701 "nvmf_subsystem_get_controllers", 00:05:14.701 "nvmf_get_stats", 00:05:14.701 "nvmf_get_transports", 00:05:14.701 "nvmf_create_transport", 00:05:14.701 "nvmf_get_targets", 00:05:14.701 
"nvmf_delete_target", 00:05:14.701 "nvmf_create_target", 00:05:14.701 "nvmf_subsystem_allow_any_host", 00:05:14.701 "nvmf_subsystem_set_keys", 00:05:14.701 "nvmf_subsystem_remove_host", 00:05:14.701 "nvmf_subsystem_add_host", 00:05:14.701 "nvmf_ns_remove_host", 00:05:14.701 "nvmf_ns_add_host", 00:05:14.701 "nvmf_subsystem_remove_ns", 00:05:14.701 "nvmf_subsystem_set_ns_ana_group", 00:05:14.701 "nvmf_subsystem_add_ns", 00:05:14.701 "nvmf_subsystem_listener_set_ana_state", 00:05:14.701 "nvmf_discovery_get_referrals", 00:05:14.701 "nvmf_discovery_remove_referral", 00:05:14.701 "nvmf_discovery_add_referral", 00:05:14.701 "nvmf_subsystem_remove_listener", 00:05:14.701 "nvmf_subsystem_add_listener", 00:05:14.701 "nvmf_delete_subsystem", 00:05:14.701 "nvmf_create_subsystem", 00:05:14.701 "nvmf_get_subsystems", 00:05:14.701 "env_dpdk_get_mem_stats", 00:05:14.701 "nbd_get_disks", 00:05:14.701 "nbd_stop_disk", 00:05:14.701 "nbd_start_disk", 00:05:14.701 "ublk_recover_disk", 00:05:14.701 "ublk_get_disks", 00:05:14.701 "ublk_stop_disk", 00:05:14.701 "ublk_start_disk", 00:05:14.701 "ublk_destroy_target", 00:05:14.701 "ublk_create_target", 00:05:14.701 "virtio_blk_create_transport", 00:05:14.701 "virtio_blk_get_transports", 00:05:14.701 "vhost_controller_set_coalescing", 00:05:14.701 "vhost_get_controllers", 00:05:14.701 "vhost_delete_controller", 00:05:14.701 "vhost_create_blk_controller", 00:05:14.701 "vhost_scsi_controller_remove_target", 00:05:14.701 "vhost_scsi_controller_add_target", 00:05:14.701 "vhost_start_scsi_controller", 00:05:14.701 "vhost_create_scsi_controller", 00:05:14.701 "thread_set_cpumask", 00:05:14.701 "scheduler_set_options", 00:05:14.701 "framework_get_governor", 00:05:14.701 "framework_get_scheduler", 00:05:14.701 "framework_set_scheduler", 00:05:14.701 "framework_get_reactors", 00:05:14.701 "thread_get_io_channels", 00:05:14.701 "thread_get_pollers", 00:05:14.701 "thread_get_stats", 00:05:14.701 "framework_monitor_context_switch", 00:05:14.701 "spdk_kill_instance", 00:05:14.701 "log_enable_timestamps", 00:05:14.701 "log_get_flags", 00:05:14.701 "log_clear_flag", 00:05:14.701 "log_set_flag", 00:05:14.701 "log_get_level", 00:05:14.701 "log_set_level", 00:05:14.701 "log_get_print_level", 00:05:14.701 "log_set_print_level", 00:05:14.701 "framework_enable_cpumask_locks", 00:05:14.701 "framework_disable_cpumask_locks", 00:05:14.702 "framework_wait_init", 00:05:14.702 "framework_start_init", 00:05:14.702 "scsi_get_devices", 00:05:14.702 "bdev_get_histogram", 00:05:14.702 "bdev_enable_histogram", 00:05:14.702 "bdev_set_qos_limit", 00:05:14.702 "bdev_set_qd_sampling_period", 00:05:14.702 "bdev_get_bdevs", 00:05:14.702 "bdev_reset_iostat", 00:05:14.702 "bdev_get_iostat", 00:05:14.702 "bdev_examine", 00:05:14.702 "bdev_wait_for_examine", 00:05:14.702 "bdev_set_options", 00:05:14.702 "accel_get_stats", 00:05:14.702 "accel_set_options", 00:05:14.702 "accel_set_driver", 00:05:14.702 "accel_crypto_key_destroy", 00:05:14.702 "accel_crypto_keys_get", 00:05:14.702 "accel_crypto_key_create", 00:05:14.702 "accel_assign_opc", 00:05:14.702 "accel_get_module_info", 00:05:14.702 "accel_get_opc_assignments", 00:05:14.702 "vmd_rescan", 00:05:14.702 "vmd_remove_device", 00:05:14.702 "vmd_enable", 00:05:14.702 "sock_get_default_impl", 00:05:14.702 "sock_set_default_impl", 00:05:14.702 "sock_impl_set_options", 00:05:14.702 "sock_impl_get_options", 00:05:14.702 "iobuf_get_stats", 00:05:14.702 "iobuf_set_options", 00:05:14.702 "keyring_get_keys", 00:05:14.702 "framework_get_pci_devices", 00:05:14.702 
"framework_get_config", 00:05:14.702 "framework_get_subsystems", 00:05:14.702 "fsdev_set_opts", 00:05:14.702 "fsdev_get_opts", 00:05:14.702 "trace_get_info", 00:05:14.702 "trace_get_tpoint_group_mask", 00:05:14.702 "trace_disable_tpoint_group", 00:05:14.702 "trace_enable_tpoint_group", 00:05:14.702 "trace_clear_tpoint_mask", 00:05:14.702 "trace_set_tpoint_mask", 00:05:14.702 "notify_get_notifications", 00:05:14.702 "notify_get_types", 00:05:14.702 "spdk_get_version", 00:05:14.702 "rpc_get_methods" 00:05:14.702 ] 00:05:14.702 13:03:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.702 13:03:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.702 13:03:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.702 13:03:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.702 13:03:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59999 00:05:14.702 13:03:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59999 ']' 00:05:14.702 13:03:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59999 00:05:14.702 13:03:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59999 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.964 killing process with pid 59999 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59999' 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59999 00:05:14.964 13:03:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59999 00:05:17.514 00:05:17.514 real 0m4.726s 00:05:17.514 user 0m8.205s 00:05:17.514 sys 0m0.877s 00:05:17.514 ************************************ 00:05:17.514 END TEST spdkcli_tcp 00:05:17.514 ************************************ 00:05:17.514 13:03:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.514 13:03:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.514 13:03:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.514 13:03:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.514 13:03:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.514 13:03:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.514 ************************************ 00:05:17.514 START TEST dpdk_mem_utility 00:05:17.514 ************************************ 00:05:17.514 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.774 * Looking for test storage... 
00:05:17.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.775 13:03:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.775 --rc genhtml_branch_coverage=1 00:05:17.775 --rc genhtml_function_coverage=1 00:05:17.775 --rc genhtml_legend=1 00:05:17.775 --rc geninfo_all_blocks=1 00:05:17.775 --rc geninfo_unexecuted_blocks=1 00:05:17.775 00:05:17.775 ' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.775 --rc 
genhtml_branch_coverage=1 00:05:17.775 --rc genhtml_function_coverage=1 00:05:17.775 --rc genhtml_legend=1 00:05:17.775 --rc geninfo_all_blocks=1 00:05:17.775 --rc geninfo_unexecuted_blocks=1 00:05:17.775 00:05:17.775 ' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.775 --rc genhtml_branch_coverage=1 00:05:17.775 --rc genhtml_function_coverage=1 00:05:17.775 --rc genhtml_legend=1 00:05:17.775 --rc geninfo_all_blocks=1 00:05:17.775 --rc geninfo_unexecuted_blocks=1 00:05:17.775 00:05:17.775 ' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.775 --rc genhtml_branch_coverage=1 00:05:17.775 --rc genhtml_function_coverage=1 00:05:17.775 --rc genhtml_legend=1 00:05:17.775 --rc geninfo_all_blocks=1 00:05:17.775 --rc geninfo_unexecuted_blocks=1 00:05:17.775 00:05:17.775 ' 00:05:17.775 13:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.775 13:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60127 00:05:17.775 13:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.775 13:03:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60127 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60127 ']' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.775 13:03:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.034 [2024-12-11 13:03:09.423730] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
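The mem-utility test that runs below drives two tools against the live target: an RPC that makes the target dump its DPDK memory state to a file, and a post-processing script over that dump. Both invocations match the trace (paths per the repo layout):

scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py                # summarize heaps, mempools, memzones
scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as dumped below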
00:05:18.034 [2024-12-11 13:03:09.423898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 00:05:18.293 [2024-12-11 13:03:09.609670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.293 [2024-12-11 13:03:09.754405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.230 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.230 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:19.230 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.230 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.230 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.230 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 { 00:05:19.230 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.230 } 00:05:19.230 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.230 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:19.491 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:19.491 1 heaps totaling size 824.000000 MiB 00:05:19.491 size: 824.000000 MiB heap id: 0 00:05:19.491 end heaps---------- 00:05:19.491 9 mempools totaling size 603.782043 MiB 00:05:19.491 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.491 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.491 size: 100.555481 MiB name: bdev_io_60127 00:05:19.491 size: 50.003479 MiB name: msgpool_60127 00:05:19.491 size: 36.509338 MiB name: fsdev_io_60127 00:05:19.491 size: 21.763794 MiB name: PDU_Pool 00:05:19.491 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.491 size: 4.133484 MiB name: evtpool_60127 00:05:19.491 size: 0.026123 MiB name: Session_Pool 00:05:19.491 end mempools------- 00:05:19.491 6 memzones totaling size 4.142822 MiB 00:05:19.491 size: 1.000366 MiB name: RG_ring_0_60127 00:05:19.491 size: 1.000366 MiB name: RG_ring_1_60127 00:05:19.491 size: 1.000366 MiB name: RG_ring_4_60127 00:05:19.491 size: 1.000366 MiB name: RG_ring_5_60127 00:05:19.491 size: 0.125366 MiB name: RG_ring_2_60127 00:05:19.491 size: 0.015991 MiB name: RG_ring_3_60127 00:05:19.491 end memzones------- 00:05:19.491 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.491 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:05:19.491 list of free elements. 
size: 16.780884 MiB 00:05:19.491 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:19.491 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:19.491 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:19.491 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:19.491 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:19.491 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:19.491 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:19.491 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:19.491 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:19.491 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:19.491 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:19.491 element at address: 0x20001b400000 with size: 0.562439 MiB 00:05:19.491 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:19.491 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:19.491 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:19.491 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:19.491 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:19.491 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:19.491 list of standard malloc elements. size: 199.288208 MiB 00:05:19.491 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:19.491 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:19.491 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:19.491 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:19.491 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:19.491 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:19.491 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:19.492 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:19.492 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:19.492 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:19.492 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:19.492 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:19.492 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:19.492 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:19.492 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b491bc0 with size: 0.000244 MiB 
00:05:19.493 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:19.493 element at 
address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:19.493 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:19.493 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d680 
with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:19.493 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:19.493 list of memzone associated elements. 
size: 607.930908 MiB 00:05:19.493 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:19.493 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.493 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:19.493 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.494 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:19.494 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60127_0 00:05:19.494 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:19.494 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60127_0 00:05:19.494 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:19.494 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60127_0 00:05:19.494 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:19.494 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.494 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:19.494 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.494 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:19.494 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60127_0 00:05:19.494 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:19.494 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60127 00:05:19.494 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:19.494 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60127 00:05:19.494 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:19.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.494 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:19.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.494 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:19.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.494 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:19.494 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.494 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:19.494 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60127 00:05:19.494 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:19.494 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60127 00:05:19.494 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:19.494 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60127 00:05:19.494 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:19.494 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60127 00:05:19.494 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:19.494 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60127 00:05:19.494 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:19.494 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60127 00:05:19.494 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:19.494 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.494 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:19.494 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.494 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:19.494 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.494 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:19.494 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60127 00:05:19.494 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:19.494 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60127 00:05:19.494 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:19.494 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.494 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:19.494 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.494 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:19.494 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60127 00:05:19.494 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:19.494 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.494 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:19.494 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60127 00:05:19.494 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:19.494 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60127 00:05:19.494 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:19.494 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60127 00:05:19.494 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:19.494 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.494 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.494 13:03:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60127 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60127 ']' 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60127 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60127 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.494 killing process with pid 60127 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60127' 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60127 00:05:19.494 13:03:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60127 00:05:22.784 00:05:22.784 real 0m4.519s 00:05:22.784 user 0m4.216s 00:05:22.784 sys 0m0.779s 00:05:22.784 13:03:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.784 ************************************ 00:05:22.784 END TEST dpdk_mem_utility 00:05:22.784 13:03:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.784 ************************************ 00:05:22.784 13:03:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:22.784 13:03:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.784 13:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.784 13:03:13 -- common/autotest_common.sh@10 -- # set +x 
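The dpdk_mem_utility test above exercises two pieces: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK heap, mempool, and memzone statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump (no arguments for the totals shown first, -m <heap-id> for the per-heap element list that follows). A minimal sketch of the same flow run by hand, assuming the repo layout and binary paths that appear in this log:

#!/usr/bin/env bash
# Sketch only: drive the dpdk_mem_utility flow manually against a standalone target.
# SPDK_DIR and the spdk_tgt binary location are assumptions taken from this log.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &    # same single-core mask as this run
tgt_pid=$!
trap 'kill "$tgt_pid"' EXIT
sleep 2                                    # crude startup wait; the harness polls the RPC socket instead

# Ask the target to dump its DPDK memory stats; the RPC replies with the dump filename.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

# Post-process the dump the way test_dpdk_mem_info.sh does at steps @21 and @23.
"$SPDK_DIR/scripts/dpdk_mem_info.py"         # heap/mempool/memzone totals
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0    # element-level detail for heap id 0

Most of the per-heap entries above report 0.000244 MiB, i.e. 256-byte elements, which is why the element list runs to hundreds of lines while the heap itself totals only 824 MiB.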
00:05:22.784 ************************************ 00:05:22.784 START TEST event 00:05:22.784 ************************************ 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:22.784 * Looking for test storage... 00:05:22.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.784 13:03:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.784 13:03:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.784 13:03:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.784 13:03:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.784 13:03:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.784 13:03:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.784 13:03:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.784 13:03:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.784 13:03:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.784 13:03:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.784 13:03:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.784 13:03:13 event -- scripts/common.sh@344 -- # case "$op" in 00:05:22.784 13:03:13 event -- scripts/common.sh@345 -- # : 1 00:05:22.784 13:03:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.784 13:03:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.784 13:03:13 event -- scripts/common.sh@365 -- # decimal 1 00:05:22.784 13:03:13 event -- scripts/common.sh@353 -- # local d=1 00:05:22.784 13:03:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.784 13:03:13 event -- scripts/common.sh@355 -- # echo 1 00:05:22.784 13:03:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.784 13:03:13 event -- scripts/common.sh@366 -- # decimal 2 00:05:22.784 13:03:13 event -- scripts/common.sh@353 -- # local d=2 00:05:22.784 13:03:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.784 13:03:13 event -- scripts/common.sh@355 -- # echo 2 00:05:22.784 13:03:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.784 13:03:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.784 13:03:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.784 13:03:13 event -- scripts/common.sh@368 -- # return 0 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.784 --rc genhtml_branch_coverage=1 00:05:22.784 --rc genhtml_function_coverage=1 00:05:22.784 --rc genhtml_legend=1 00:05:22.784 --rc geninfo_all_blocks=1 00:05:22.784 --rc geninfo_unexecuted_blocks=1 00:05:22.784 00:05:22.784 ' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.784 --rc genhtml_branch_coverage=1 00:05:22.784 --rc genhtml_function_coverage=1 00:05:22.784 --rc genhtml_legend=1 00:05:22.784 --rc 
geninfo_all_blocks=1 00:05:22.784 --rc geninfo_unexecuted_blocks=1 00:05:22.784 00:05:22.784 ' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.784 --rc genhtml_branch_coverage=1 00:05:22.784 --rc genhtml_function_coverage=1 00:05:22.784 --rc genhtml_legend=1 00:05:22.784 --rc geninfo_all_blocks=1 00:05:22.784 --rc geninfo_unexecuted_blocks=1 00:05:22.784 00:05:22.784 ' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.784 --rc genhtml_branch_coverage=1 00:05:22.784 --rc genhtml_function_coverage=1 00:05:22.784 --rc genhtml_legend=1 00:05:22.784 --rc geninfo_all_blocks=1 00:05:22.784 --rc geninfo_unexecuted_blocks=1 00:05:22.784 00:05:22.784 ' 00:05:22.784 13:03:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:22.784 13:03:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:22.784 13:03:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:22.784 13:03:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.784 13:03:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.784 ************************************ 00:05:22.784 START TEST event_perf 00:05:22.784 ************************************ 00:05:22.784 13:03:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.784 Running I/O for 1 seconds...[2024-12-11 13:03:13.961166] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:22.784 [2024-12-11 13:03:13.961283] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60239 ] 00:05:22.784 [2024-12-11 13:03:14.146877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.784 [2024-12-11 13:03:14.299898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.784 [2024-12-11 13:03:14.300098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.784 [2024-12-11 13:03:14.300141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.784 [2024-12-11 13:03:14.300141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.167 Running I/O for 1 seconds... 00:05:24.167 lcore 0: 85350 00:05:24.167 lcore 1: 85354 00:05:24.167 lcore 2: 85357 00:05:24.167 lcore 3: 85346 00:05:24.167 done. 
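The lcore lines just above are event_perf's per-core event counters after a one-second run across core mask 0xF; the four counts sum to roughly 341,000 dispatched events. A small sketch that reruns the binary and totals those counters, assuming the test binary path from this log and root privileges for hugepage access:

EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
sudo "$EVENT_PERF" -m 0xF -t 1 |
    awk '/^lcore [0-9]+:/ { total += $3 }      # matches the "lcore N: count" lines
         END { print "total events:", total }'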
00:05:24.167 00:05:24.167 real 0m1.651s 00:05:24.167 user 0m4.387s 00:05:24.167 sys 0m0.144s 00:05:24.167 13:03:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.167 13:03:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.167 ************************************ 00:05:24.167 END TEST event_perf 00:05:24.167 ************************************ 00:05:24.167 13:03:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:24.167 13:03:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:24.167 13:03:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.167 13:03:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.167 ************************************ 00:05:24.167 START TEST event_reactor 00:05:24.167 ************************************ 00:05:24.167 13:03:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:24.167 [2024-12-11 13:03:15.687518] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:24.167 [2024-12-11 13:03:15.687636] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:05:24.426 [2024-12-11 13:03:15.870386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.685 [2024-12-11 13:03:16.010797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.064 test_start 00:05:26.064 oneshot 00:05:26.064 tick 100 00:05:26.064 tick 100 00:05:26.064 tick 250 00:05:26.064 tick 100 00:05:26.064 tick 100 00:05:26.064 tick 100 00:05:26.064 tick 250 00:05:26.064 tick 500 00:05:26.064 tick 100 00:05:26.064 tick 100 00:05:26.064 tick 250 00:05:26.064 tick 100 00:05:26.064 tick 100 00:05:26.064 test_end 00:05:26.064 00:05:26.064 real 0m1.614s 00:05:26.064 user 0m1.395s 00:05:26.064 sys 0m0.111s 00:05:26.065 13:03:17 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.065 13:03:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.065 ************************************ 00:05:26.065 END TEST event_reactor 00:05:26.065 ************************************ 00:05:26.065 13:03:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.065 13:03:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.065 13:03:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.065 13:03:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.065 ************************************ 00:05:26.065 START TEST event_reactor_perf 00:05:26.065 ************************************ 00:05:26.065 13:03:17 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.065 [2024-12-11 13:03:17.373904] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:05:26.065 [2024-12-11 13:03:17.374022] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:05:26.065 [2024-12-11 13:03:17.557737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.324 [2024-12-11 13:03:17.691983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.703 test_start 00:05:27.703 test_end 00:05:27.703 Performance: 383412 events per second 00:05:27.703 00:05:27.703 real 0m1.620s 00:05:27.703 user 0m1.374s 00:05:27.703 sys 0m0.137s 00:05:27.703 13:03:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.703 13:03:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.703 ************************************ 00:05:27.703 END TEST event_reactor_perf 00:05:27.703 ************************************ 00:05:27.703 13:03:19 event -- event/event.sh@49 -- # uname -s 00:05:27.703 13:03:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.703 13:03:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:27.703 13:03:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.703 13:03:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.703 13:03:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.703 ************************************ 00:05:27.703 START TEST event_scheduler 00:05:27.703 ************************************ 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:27.703 * Looking for test storage... 
00:05:27.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.703 13:03:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.703 --rc genhtml_branch_coverage=1 00:05:27.703 --rc genhtml_function_coverage=1 00:05:27.703 --rc genhtml_legend=1 00:05:27.703 --rc geninfo_all_blocks=1 00:05:27.703 --rc geninfo_unexecuted_blocks=1 00:05:27.703 00:05:27.703 ' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.703 --rc genhtml_branch_coverage=1 00:05:27.703 --rc genhtml_function_coverage=1 00:05:27.703 --rc genhtml_legend=1 00:05:27.703 --rc geninfo_all_blocks=1 00:05:27.703 --rc geninfo_unexecuted_blocks=1 00:05:27.703 00:05:27.703 ' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.703 --rc genhtml_branch_coverage=1 00:05:27.703 --rc genhtml_function_coverage=1 00:05:27.703 --rc genhtml_legend=1 00:05:27.703 --rc geninfo_all_blocks=1 00:05:27.703 --rc geninfo_unexecuted_blocks=1 00:05:27.703 00:05:27.703 ' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.703 --rc genhtml_branch_coverage=1 00:05:27.703 --rc genhtml_function_coverage=1 00:05:27.703 --rc genhtml_legend=1 00:05:27.703 --rc geninfo_all_blocks=1 00:05:27.703 --rc geninfo_unexecuted_blocks=1 00:05:27.703 00:05:27.703 ' 00:05:27.703 13:03:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.703 13:03:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60392 00:05:27.703 13:03:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.703 13:03:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.703 13:03:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60392 00:05:27.703 13:03:19 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60392 ']' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.703 13:03:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.963 [2024-12-11 13:03:19.352672] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:27.963 [2024-12-11 13:03:19.352812] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60392 ] 00:05:28.221 [2024-12-11 13:03:19.538572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.222 [2024-12-11 13:03:19.661844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.222 [2024-12-11 13:03:19.662026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.222 [2024-12-11 13:03:19.662218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.222 [2024-12-11 13:03:19.662291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.790 13:03:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.790 13:03:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:28.790 13:03:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.790 13:03:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.790 13:03:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.790 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.790 POWER: Cannot set governor of lcore 0 to performance 00:05:28.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.790 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.791 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.791 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.791 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:28.791 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:28.791 POWER: Unable to set Power Management Environment for lcore 0 00:05:28.791 [2024-12-11 13:03:20.215545] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:28.791 [2024-12-11 13:03:20.215574] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:28.791 [2024-12-11 13:03:20.215587] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.791 [2024-12-11 13:03:20.215610] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.791 [2024-12-11 13:03:20.215621] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.791 [2024-12-11 13:03:20.215638] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.791 13:03:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.791 13:03:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.791 13:03:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.791 13:03:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.050 [2024-12-11 13:03:20.560505] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:29.050 13:03:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.050 13:03:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:29.050 13:03:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.050 13:03:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.050 13:03:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.050 ************************************ 00:05:29.050 START TEST scheduler_create_thread 00:05:29.050 ************************************ 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.050 2 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.050 3 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.050 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.309 4 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.309 5 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.309 6 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.309 7 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.309 8 00:05:29.309 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.310 9 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.310 10 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.310 13:03:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.745 13:03:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.745 13:03:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.745 13:03:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.745 13:03:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.745 13:03:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.313 13:03:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.313 13:03:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.313 13:03:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.313 13:03:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.250 13:03:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.250 13:03:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.250 13:03:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.250 13:03:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.250 13:03:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.187 ************************************ 00:05:33.187 END TEST scheduler_create_thread 00:05:33.187 ************************************ 00:05:33.187 13:03:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.187 00:05:33.187 real 0m3.882s 00:05:33.187 user 0m0.027s 00:05:33.187 sys 0m0.007s 00:05:33.187 13:03:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.187 13:03:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.187 13:03:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.187 13:03:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60392 00:05:33.187 13:03:24 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60392 ']' 00:05:33.187 13:03:24 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60392 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60392 00:05:33.188 killing process with pid 60392 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60392' 00:05:33.188 13:03:24 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60392 00:05:33.188 13:03:24 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60392 00:05:33.447 [2024-12-11 13:03:24.838868] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:34.832 ************************************ 00:05:34.832 END TEST event_scheduler 00:05:34.832 ************************************ 00:05:34.832 00:05:34.832 real 0m6.988s 00:05:34.832 user 0m14.363s 00:05:34.832 sys 0m0.586s 00:05:34.832 13:03:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.832 13:03:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.832 13:03:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.832 13:03:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.832 13:03:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.832 13:03:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.832 13:03:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.832 ************************************ 00:05:34.832 START TEST app_repeat 00:05:34.832 ************************************ 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.832 Process app_repeat pid: 60515 00:05:34.832 spdk_app_start Round 0 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60515 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60515' 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.832 13:03:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60515 /var/tmp/spdk-nbd.sock 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60515 ']' 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.832 13:03:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.832 [2024-12-11 13:03:26.168977] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
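
[editor's sketch] The event_scheduler trace above reduces to a short RPC sequence. A minimal replay, assuming an SPDK target is already running on the default RPC socket with the test's scheduler_plugin importable by rpc.py (the plugin flag and thread-id capture are taken directly from the trace; everything else is standard rpc.py usage):

  # Replaying the scheduler_create_thread sequence traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Four threads pinned to cores 0-3 via cpumask, all idle (-a 0 = 0% active).
  for mask in 0x1 0x2 0x4 0x8; do
      "$rpc" --plugin scheduler_plugin scheduler_thread_create \
          -n idle_pinned -m "$mask" -a 0
  done

  # An unpinned thread that is busy ~30% of the time.
  "$rpc" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30

  # Create idle, then raise activity to 50% using the returned thread id
  # (the trace captures it the same way: thread_id=11).
  tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

  # Create a fully busy thread and delete it again (thread_id=12 in the trace).
  tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"
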
00:05:34.832 [2024-12-11 13:03:26.169089] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60515 ] 00:05:34.832 [2024-12-11 13:03:26.350190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.091 [2024-12-11 13:03:26.492473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.091 [2024-12-11 13:03:26.492507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.659 13:03:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.659 13:03:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.659 13:03:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.919 Malloc0 00:05:35.919 13:03:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.178 Malloc1 00:05:36.178 13:03:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.178 13:03:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.437 /dev/nbd0 00:05:36.437 13:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.437 13:03:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.437 13:03:27 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.437 1+0 records in 00:05:36.437 1+0 records out 00:05:36.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003722 s, 11.0 MB/s 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.437 13:03:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.437 13:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.437 13:03:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.437 13:03:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.696 /dev/nbd1 00:05:36.696 13:03:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.696 13:03:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.696 13:03:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.697 1+0 records in 00:05:36.697 1+0 records out 00:05:36.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412994 s, 9.9 MB/s 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.697 13:03:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.697 13:03:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.697 13:03:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.697 13:03:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.697 13:03:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.697 
13:03:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.956 { 00:05:36.956 "nbd_device": "/dev/nbd0", 00:05:36.956 "bdev_name": "Malloc0" 00:05:36.956 }, 00:05:36.956 { 00:05:36.956 "nbd_device": "/dev/nbd1", 00:05:36.956 "bdev_name": "Malloc1" 00:05:36.956 } 00:05:36.956 ]' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.956 { 00:05:36.956 "nbd_device": "/dev/nbd0", 00:05:36.956 "bdev_name": "Malloc0" 00:05:36.956 }, 00:05:36.956 { 00:05:36.956 "nbd_device": "/dev/nbd1", 00:05:36.956 "bdev_name": "Malloc1" 00:05:36.956 } 00:05:36.956 ]' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.956 /dev/nbd1' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.956 /dev/nbd1' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.956 256+0 records in 00:05:36.956 256+0 records out 00:05:36.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531786 s, 197 MB/s 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.956 256+0 records in 00:05:36.956 256+0 records out 00:05:36.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303506 s, 34.5 MB/s 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.956 256+0 records in 00:05:36.956 256+0 records out 00:05:36.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328072 s, 32.0 MB/s 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.956 13:03:28 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.956 13:03:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.957 13:03:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.216 13:03:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.475 13:03:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.475 13:03:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.734 13:03:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.734 13:03:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.302 13:03:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.682 [2024-12-11 13:03:30.955769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.682 [2024-12-11 13:03:31.082895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.682 [2024-12-11 13:03:31.082895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.944 [2024-12-11 13:03:31.314850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.944 [2024-12-11 13:03:31.314930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.323 spdk_app_start Round 1 00:05:41.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.323 13:03:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.323 13:03:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.323 13:03:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60515 /var/tmp/spdk-nbd.sock 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60515 ']' 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
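
[editor's sketch] Round 0 above exercised nbd_dd_data_verify: fill a scratch file with random data, raw-write it to each exported nbd device, then read it back and compare. Every command below appears verbatim in the trace; only the scratch-file path is an assumption (the harness keeps it under the test tree as nbdrandtest):

  # nbd data-verify flow, assuming Malloc0/Malloc1 are already exported
  # as /dev/nbd0 and /dev/nbd1 via nbd_start_disk.
  tmp=/tmp/nbdrandtest                                      # hypothetical path
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data

  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # raw write, no cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                            # byte-for-byte readback check
  done
  rm "$tmp"
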
00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.323 13:03:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:41.323 13:03:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.582 Malloc0 00:05:41.843 13:03:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.105 Malloc1 00:05:42.105 13:03:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.105 13:03:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.105 /dev/nbd0 00:05:42.364 13:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.364 13:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.364 1+0 records in 00:05:42.364 1+0 records out 
00:05:42.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203578 s, 20.1 MB/s 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.364 13:03:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.364 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.364 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.364 13:03:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.364 /dev/nbd1 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.624 1+0 records in 00:05:42.624 1+0 records out 00:05:42.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034574 s, 11.8 MB/s 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.624 13:03:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.624 13:03:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.624 13:03:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.624 { 00:05:42.624 "nbd_device": "/dev/nbd0", 00:05:42.624 "bdev_name": "Malloc0" 00:05:42.624 }, 00:05:42.624 { 00:05:42.624 "nbd_device": "/dev/nbd1", 00:05:42.624 "bdev_name": "Malloc1" 00:05:42.624 } 
00:05:42.624 ]' 00:05:42.624 13:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.624 13:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.624 { 00:05:42.624 "nbd_device": "/dev/nbd0", 00:05:42.624 "bdev_name": "Malloc0" 00:05:42.624 }, 00:05:42.624 { 00:05:42.624 "nbd_device": "/dev/nbd1", 00:05:42.624 "bdev_name": "Malloc1" 00:05:42.624 } 00:05:42.624 ]' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.883 /dev/nbd1' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.883 /dev/nbd1' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.883 256+0 records in 00:05:42.883 256+0 records out 00:05:42.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622448 s, 168 MB/s 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.883 256+0 records in 00:05:42.883 256+0 records out 00:05:42.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281515 s, 37.2 MB/s 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.883 256+0 records in 00:05:42.883 256+0 records out 00:05:42.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029926 s, 35.0 MB/s 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.883 13:03:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.142 13:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.142 13:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.143 13:03:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.402 13:03:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.661 13:03:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.661 13:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.661 13:03:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.661 13:03:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.661 13:03:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.920 13:03:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.298 [2024-12-11 13:03:36.786172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.557 [2024-12-11 13:03:36.915764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.557 [2024-12-11 13:03:36.915785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.816 [2024-12-11 13:03:37.148631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.816 [2024-12-11 13:03:37.148699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.206 spdk_app_start Round 2 00:05:47.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.207 13:03:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.207 13:03:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.207 13:03:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60515 /var/tmp/spdk-nbd.sock 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60515 ']' 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
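
[editor's sketch] The nbd_get_count checks traced in each round combine three pieces visible above: nbd_get_disks over RPC, a jq filter for the device nodes, and grep -c to count them. A sketch, assuming the same /var/tmp/spdk-nbd.sock socket; note grep -c still prints 0 on no matches but exits nonzero, which is why the trace runs true afterwards:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  names=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device')                  # e.g. "/dev/nbd0" "/dev/nbd1"
  count=$(echo "$names" | grep -c /dev/nbd || true)     # 2 while attached, 0 after stop
  echo "$count nbd device(s) attached"
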
00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.207 13:03:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.207 13:03:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.466 Malloc0 00:05:47.466 13:03:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.725 Malloc1 00:05:47.725 13:03:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.725 13:03:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.726 13:03:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.726 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.726 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.726 13:03:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.984 /dev/nbd0 00:05:47.984 13:03:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.984 13:03:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.984 1+0 records in 00:05:47.984 1+0 records out 
00:05:47.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291133 s, 14.1 MB/s 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.984 13:03:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.985 13:03:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.985 13:03:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.985 13:03:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.985 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.985 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.985 13:03:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.243 /dev/nbd1 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.243 1+0 records in 00:05:48.243 1+0 records out 00:05:48.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458101 s, 8.9 MB/s 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.243 13:03:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.243 13:03:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.244 13:03:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.503 13:03:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.503 { 00:05:48.503 "nbd_device": "/dev/nbd0", 00:05:48.503 "bdev_name": "Malloc0" 00:05:48.503 }, 00:05:48.503 { 00:05:48.503 "nbd_device": "/dev/nbd1", 00:05:48.504 "bdev_name": "Malloc1" 00:05:48.504 } 
00:05:48.504 ]' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.504 { 00:05:48.504 "nbd_device": "/dev/nbd0", 00:05:48.504 "bdev_name": "Malloc0" 00:05:48.504 }, 00:05:48.504 { 00:05:48.504 "nbd_device": "/dev/nbd1", 00:05:48.504 "bdev_name": "Malloc1" 00:05:48.504 } 00:05:48.504 ]' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.504 /dev/nbd1' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.504 /dev/nbd1' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.504 13:03:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.763 256+0 records in 00:05:48.763 256+0 records out 00:05:48.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137413 s, 76.3 MB/s 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.763 256+0 records in 00:05:48.763 256+0 records out 00:05:48.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240163 s, 43.7 MB/s 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.763 256+0 records in 00:05:48.763 256+0 records out 00:05:48.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352626 s, 29.7 MB/s 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.763 13:03:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.763 13:03:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.023 13:03:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.282 13:03:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.541 13:03:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.542 13:03:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.801 13:03:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.179 [2024-12-11 13:03:42.659069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.438 [2024-12-11 13:03:42.792717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.438 [2024-12-11 13:03:42.792718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.697 [2024-12-11 13:03:43.024615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.697 [2024-12-11 13:03:43.024717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.076 13:03:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60515 /var/tmp/spdk-nbd.sock 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60515 ']' 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
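
[editor's sketch] The teardown path above (nbd_common.sh@35-45) waits for each stopped device to vanish from /proc/partitions. Reconstructed from the trace, which only shows the happy path where the first probe already fails and the loop breaks; the retry delay between probes is an assumption:

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              break               # device node is gone; done waiting
          fi
          sleep 0.1               # hypothetical backoff between probes
      done
      return 0
  }
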
00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.076 13:03:44 event.app_repeat -- event/event.sh@39 -- # killprocess 60515 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60515 ']' 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60515 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60515 00:05:53.076 killing process with pid 60515 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60515' 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60515 00:05:53.076 13:03:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60515 00:05:54.455 spdk_app_start is called in Round 0. 00:05:54.455 Shutdown signal received, stop current app iteration 00:05:54.455 Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 reinitialization... 00:05:54.455 spdk_app_start is called in Round 1. 00:05:54.455 Shutdown signal received, stop current app iteration 00:05:54.455 Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 reinitialization... 00:05:54.455 spdk_app_start is called in Round 2. 00:05:54.455 Shutdown signal received, stop current app iteration 00:05:54.455 Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 reinitialization... 00:05:54.455 spdk_app_start is called in Round 3. 00:05:54.455 Shutdown signal received, stop current app iteration 00:05:54.455 13:03:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.455 ************************************ 00:05:54.455 END TEST app_repeat 00:05:54.455 ************************************ 00:05:54.455 13:03:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.455 00:05:54.455 real 0m19.709s 00:05:54.455 user 0m41.162s 00:05:54.455 sys 0m3.627s 00:05:54.455 13:03:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.455 13:03:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.455 13:03:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.455 13:03:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.455 13:03:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.455 13:03:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.455 13:03:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.455 ************************************ 00:05:54.455 START TEST cpu_locks 00:05:54.455 ************************************ 00:05:54.455 13:03:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.455 * Looking for test storage... 
00:05:54.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.715 13:03:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.715 --rc genhtml_branch_coverage=1 00:05:54.715 --rc genhtml_function_coverage=1 00:05:54.715 --rc genhtml_legend=1 00:05:54.715 --rc geninfo_all_blocks=1 00:05:54.715 --rc geninfo_unexecuted_blocks=1 00:05:54.715 00:05:54.715 ' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.715 --rc genhtml_branch_coverage=1 00:05:54.715 --rc genhtml_function_coverage=1 
00:05:54.715 --rc genhtml_legend=1 00:05:54.715 --rc geninfo_all_blocks=1 00:05:54.715 --rc geninfo_unexecuted_blocks=1 00:05:54.715 00:05:54.715 ' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.715 --rc genhtml_branch_coverage=1 00:05:54.715 --rc genhtml_function_coverage=1 00:05:54.715 --rc genhtml_legend=1 00:05:54.715 --rc geninfo_all_blocks=1 00:05:54.715 --rc geninfo_unexecuted_blocks=1 00:05:54.715 00:05:54.715 ' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.715 --rc genhtml_branch_coverage=1 00:05:54.715 --rc genhtml_function_coverage=1 00:05:54.715 --rc genhtml_legend=1 00:05:54.715 --rc geninfo_all_blocks=1 00:05:54.715 --rc geninfo_unexecuted_blocks=1 00:05:54.715 00:05:54.715 ' 00:05:54.715 13:03:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.715 13:03:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.715 13:03:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.715 13:03:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.715 13:03:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.716 13:03:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.716 ************************************ 00:05:54.716 START TEST default_locks 00:05:54.716 ************************************ 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60964 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60964 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60964 ']' 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.716 13:03:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.716 [2024-12-11 13:03:46.260244] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:05:54.716 [2024-12-11 13:03:46.260392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60964 ] 00:05:54.975 [2024-12-11 13:03:46.448472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.234 [2024-12-11 13:03:46.586919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.172 13:03:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.172 13:03:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:56.172 13:03:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60964 00:05:56.172 13:03:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60964 00:05:56.172 13:03:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60964 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60964 ']' 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60964 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60964 00:05:56.740 killing process with pid 60964 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60964' 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60964 00:05:56.740 13:03:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60964 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60964 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60964 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.276 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
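The `NOT waitforlisten 60964` being traced here is the negative half of default_locks: the target was already killed above, so waiting on its RPC socket has to fail, which is exactly what the ERROR line below records. In minimal sketch form, the two helpers this test leans on amount to the following (simplified from the traced autotest_common.sh and cpu_locks.sh logic; argument validation and the signal-exit branch, the `(( es > 128 ))` check, are elided):

    locks_exist() {                     # traced earlier as cpu_locks.sh@22: a live target must hold its lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    NOT() {                             # succeeds only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( !es == 0 ))                  # invert the exit status, matching the '(( !es == 0 ))' xtrace line
    }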
00:05:59.536 ERROR: process (pid: 60964) is no longer running 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60964 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60964 ']' 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60964) - No such process 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.536 00:05:59.536 real 0m4.710s 00:05:59.536 user 0m4.477s 00:05:59.536 sys 0m0.894s 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.536 13:03:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 ************************************ 00:05:59.536 END TEST default_locks 00:05:59.536 ************************************ 00:05:59.536 13:03:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.536 13:03:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.536 13:03:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.536 13:03:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 ************************************ 00:05:59.536 START TEST default_locks_via_rpc 00:05:59.536 ************************************ 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61046 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 61046 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61046 ']' 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.536 13:03:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 [2024-12-11 13:03:51.042004] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:05:59.536 [2024-12-11 13:03:51.042319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61046 ] 00:05:59.795 [2024-12-11 13:03:51.218051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.795 [2024-12-11 13:03:51.356389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61046 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61046 00:06:01.175 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61046 00:06:01.433 13:03:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61046 ']' 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61046 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61046 00:06:01.433 killing process with pid 61046 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61046' 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61046 00:06:01.433 13:03:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61046 00:06:04.724 00:06:04.724 real 0m4.749s 00:06:04.724 user 0m4.521s 00:06:04.724 sys 0m0.913s 00:06:04.724 13:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.724 ************************************ 00:06:04.724 END TEST default_locks_via_rpc 00:06:04.724 13:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.724 ************************************ 00:06:04.724 13:03:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:04.724 13:03:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.724 13:03:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.724 13:03:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.724 ************************************ 00:06:04.724 START TEST non_locking_app_on_locked_coremask 00:06:04.724 ************************************ 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61133 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61133 /var/tmp/spdk.sock 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61133 ']' 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
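default_locks_via_rpc above toggled the same per-core locks at runtime with `rpc_cmd framework_disable_cpumask_locks` and `framework_enable_cpumask_locks`; non_locking_app_on_locked_coremask, starting here, exercises the boot-time flag instead. The shape of the scenario, matching the spdk_tgt invocations in the xtrace below (a sketch, with sockets as traced):

    spdk_tgt -m 0x1 &                                                  # pid 61133: claims the core-0 lock file
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 61149: same mask, opts out of locking
    # both run side by side: 'CPU core locks deactivated' is printed for the second target only,
    # and locks_exist passes for the first pid alone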
00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.724 13:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.724 [2024-12-11 13:03:55.859765] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:04.724 [2024-12-11 13:03:55.859883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:06:04.724 [2024-12-11 13:03:56.039187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.724 [2024-12-11 13:03:56.171547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61149 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61149 /var/tmp/spdk2.sock 00:06:05.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61149 ']' 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.663 13:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 [2024-12-11 13:03:57.328571] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:05.922 [2024-12-11 13:03:57.328704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61149 ] 00:06:06.181 [2024-12-11 13:03:57.519239] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.181 [2024-12-11 13:03:57.519298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.440 [2024-12-11 13:03:57.817765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.346 13:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.346 13:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.346 13:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61133 00:06:08.346 13:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61133 00:06:08.346 13:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61133 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61133 ']' 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61133 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61133 00:06:09.283 killing process with pid 61133 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61133' 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61133 00:06:09.283 13:04:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61133 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61149 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61149 ']' 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61149 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.555 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61149 00:06:14.814 killing process with pid 61149 00:06:14.814 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.814 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.814 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61149' 00:06:14.814 13:04:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61149 00:06:14.814 13:04:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61149 00:06:17.351 ************************************ 00:06:17.351 END TEST non_locking_app_on_locked_coremask 00:06:17.351 ************************************ 00:06:17.351 00:06:17.351 real 0m13.033s 00:06:17.351 user 0m12.880s 00:06:17.351 sys 0m1.857s 00:06:17.351 13:04:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.351 13:04:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.351 13:04:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:17.351 13:04:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.351 13:04:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.351 13:04:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.351 ************************************ 00:06:17.351 START TEST locking_app_on_unlocked_coremask 00:06:17.351 ************************************ 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61310 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61310 /var/tmp/spdk.sock 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61310 ']' 00:06:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.351 13:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.610 [2024-12-11 13:04:08.971858] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:17.610 [2024-12-11 13:04:08.971971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:06:17.610 [2024-12-11 13:04:09.154662] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
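locking_app_on_unlocked_coremask inverts the previous scenario: this time it is the first target, pid 61310, whose startup banner continues just below, that skips locking, leaving the second target free to take the lock on the shared core. Roughly (a sketch; pids and sockets as traced):

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # pid 61310: holds no /var/tmp/spdk_cpu_lock_* file
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # pid 61337: claims the core-0 lock normally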
00:06:17.610 [2024-12-11 13:04:09.154863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.869 [2024-12-11 13:04:09.277441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61337 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61337 /var/tmp/spdk2.sock 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61337 ']' 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.806 13:04:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.065 [2024-12-11 13:04:10.426797] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:19.066 [2024-12-11 13:04:10.427168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:06:19.066 [2024-12-11 13:04:10.608692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.633 [2024-12-11 13:04:10.896580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.537 13:04:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.537 13:04:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.537 13:04:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61337 00:06:21.537 13:04:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61337 00:06:21.537 13:04:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61310 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61310 ']' 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61310 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61310 00:06:22.474 killing process with pid 61310 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61310' 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61310 00:06:22.474 13:04:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61310 00:06:27.754 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61337 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61337 ']' 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61337 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61337 00:06:27.755 killing process with pid 61337 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.755 13:04:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61337' 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61337 00:06:27.755 13:04:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61337 00:06:31.045 00:06:31.045 real 0m13.011s 00:06:31.045 user 0m12.935s 00:06:31.045 sys 0m1.756s 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.045 ************************************ 00:06:31.045 END TEST locking_app_on_unlocked_coremask 00:06:31.045 ************************************ 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.045 13:04:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:31.045 13:04:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.045 13:04:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.045 13:04:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.045 ************************************ 00:06:31.045 START TEST locking_app_on_locked_coremask 00:06:31.045 ************************************ 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61497 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61497 /var/tmp/spdk.sock 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61497 ']' 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.045 13:04:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.045 [2024-12-11 13:04:22.071818] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:31.045 [2024-12-11 13:04:22.072172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:06:31.045 [2024-12-11 13:04:22.260880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.045 [2024-12-11 13:04:22.399345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61513 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61513 /var/tmp/spdk2.sock 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61513 /var/tmp/spdk2.sock 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61513 /var/tmp/spdk2.sock 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61513 ']' 00:06:31.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.983 13:04:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.983 [2024-12-11 13:04:23.542157] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:31.983 [2024-12-11 13:04:23.542348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61513 ] 00:06:32.242 [2024-12-11 13:04:23.739165] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61497 has claimed it. 00:06:32.242 [2024-12-11 13:04:23.739269] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.811 ERROR: process (pid: 61513) is no longer running 00:06:32.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61513) - No such process 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61497 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61497 00:06:32.811 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61497 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61497 ']' 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61497 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61497 00:06:33.379 killing process with pid 61497 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61497' 00:06:33.379 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61497 00:06:33.380 13:04:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61497 00:06:35.915 00:06:35.915 real 0m5.522s 00:06:35.915 user 0m5.529s 00:06:35.915 sys 0m1.094s 00:06:35.915 13:04:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.915 ************************************ 00:06:35.915 END 
TEST locking_app_on_locked_coremask 00:06:35.915 ************************************ 00:06:35.915 13:04:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.175 13:04:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.175 13:04:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.175 13:04:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.175 13:04:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.175 ************************************ 00:06:36.175 START TEST locking_overlapped_coremask 00:06:36.175 ************************************ 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61588 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61588 /var/tmp/spdk.sock 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61588 ']' 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.175 13:04:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.175 [2024-12-11 13:04:27.668813] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:36.175 [2024-12-11 13:04:27.668940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61588 ] 00:06:36.434 [2024-12-11 13:04:27.853986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.694 [2024-12-11 13:04:28.006027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.694 [2024-12-11 13:04:28.006228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.694 [2024-12-11 13:04:28.006279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61612 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61612 /var/tmp/spdk2.sock 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61612 /var/tmp/spdk2.sock 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61612 /var/tmp/spdk2.sock 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61612 ']' 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.632 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.632 [2024-12-11 13:04:29.188665] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:37.632 [2024-12-11 13:04:29.188817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:06:37.891 [2024-12-11 13:04:29.378604] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61588 has claimed it. 00:06:37.892 [2024-12-11 13:04:29.378677] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.460 ERROR: process (pid: 61612) is no longer running 00:06:38.460 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61612) - No such process 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61588 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61588 ']' 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61588 00:06:38.460 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61588 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61588' 00:06:38.461 killing process with pid 61588 00:06:38.461 13:04:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61588 00:06:38.461 13:04:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61588 00:06:41.751 00:06:41.751 real 0m5.098s 00:06:41.751 user 0m13.659s 00:06:41.751 sys 0m0.880s 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 ************************************ 00:06:41.751 END TEST locking_overlapped_coremask 00:06:41.751 ************************************ 00:06:41.751 13:04:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.751 13:04:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.751 13:04:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.751 13:04:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 ************************************ 00:06:41.751 START TEST locking_overlapped_coremask_via_rpc 00:06:41.751 ************************************ 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61681 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61681 /var/tmp/spdk.sock 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61681 ']' 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.751 13:04:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.751 [2024-12-11 13:04:32.837194] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:41.751 [2024-12-11 13:04:32.837343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:06:41.751 [2024-12-11 13:04:33.018762] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.751 [2024-12-11 13:04:33.018836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.751 [2024-12-11 13:04:33.176214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.751 [2024-12-11 13:04:33.176367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.751 [2024-12-11 13:04:33.176415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61705 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61705 /var/tmp/spdk2.sock 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61705 ']' 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.131 13:04:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.131 [2024-12-11 13:04:34.400186] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:43.131 [2024-12-11 13:04:34.400603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61705 ] 00:06:43.131 [2024-12-11 13:04:34.592233] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
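With --disable-cpumask-locks both targets come up even though their core masks overlap, and that overlap is exactly what the next step exercises. The masks seen in the two launches decode as follows (a worked example, not part of the test itself):

    # 0x7  = 0b00111 -> cores 0,1,2  (first target,  -m 0x7)
    # 0x1c = 0b11100 -> cores 2,3,4  (second target, -m 0x1c)
    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2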
00:06:43.131 [2024-12-11 13:04:34.592300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.391 [2024-12-11 13:04:34.847696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.391 [2024-12-11 13:04:34.847756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.391 [2024-12-11 13:04:34.847792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.039 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.040 [2024-12-11 13:04:37.042405] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61681 has claimed it. 
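The failure is expected: each claimed core is backed by a lock file under /var/tmp, and pid 61681 already holds the one for core 2. The suite's check_remaining_locks helper, traced earlier in this log, boils down to a glob versus brace-expansion comparison:

    # condensed from event/cpu_locks.sh as shown in the trace
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # passes only while cores 0-2 are exactly the set still claimed
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'cores 0-2 still locked'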
00:06:46.040 request: 00:06:46.040 { 00:06:46.040 "method": "framework_enable_cpumask_locks", 00:06:46.040 "req_id": 1 00:06:46.040 } 00:06:46.040 Got JSON-RPC error response 00:06:46.040 response: 00:06:46.040 { 00:06:46.040 "code": -32603, 00:06:46.040 "message": "Failed to claim CPU core: 2" 00:06:46.040 } 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61681 /var/tmp/spdk.sock 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61681 ']' 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61705 /var/tmp/spdk2.sock 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61705 ']' 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
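Outside the harness, the same error can be reproduced by pointing rpc.py at the second instance's socket. This is a hypothetical invocation for illustration, with -s selecting the socket the same way the rpc_cmd wrapper does above:

    # ask the second target to start claiming its cores while core 2 is still held
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected JSON-RPC error while pid 61681 holds the core-2 lock:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}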
00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.040 00:06:46.040 real 0m4.805s 00:06:46.040 user 0m1.355s 00:06:46.040 sys 0m0.266s 00:06:46.040 ************************************ 00:06:46.040 END TEST locking_overlapped_coremask_via_rpc 00:06:46.040 ************************************ 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.040 13:04:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.040 13:04:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.040 13:04:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61681 ]] 00:06:46.040 13:04:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61681 00:06:46.040 13:04:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61681 ']' 00:06:46.040 13:04:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61681 00:06:46.040 13:04:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.040 13:04:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.040 13:04:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61681 00:06:46.298 killing process with pid 61681 00:06:46.298 13:04:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.298 13:04:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.298 13:04:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61681' 00:06:46.298 13:04:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61681 00:06:46.298 13:04:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61681 00:06:49.587 13:04:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61705 ]] 00:06:49.587 13:04:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61705 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61705 ']' 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61705 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.587 
13:04:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61705 00:06:49.587 killing process with pid 61705 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61705' 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61705 00:06:49.587 13:04:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61705 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61681 ]] 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61681 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61681 ']' 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61681 00:06:52.123 Process with pid 61681 is not found 00:06:52.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61681) - No such process 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61681 is not found' 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61705 ]] 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61705 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61705 ']' 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61705 00:06:52.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61705) - No such process 00:06:52.123 Process with pid 61705 is not found 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61705 is not found' 00:06:52.123 13:04:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.123 00:06:52.123 real 0m57.207s 00:06:52.123 user 1m35.218s 00:06:52.123 sys 0m9.264s 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.123 ************************************ 00:06:52.123 END TEST cpu_locks 00:06:52.123 ************************************ 00:06:52.123 13:04:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 ************************************ 00:06:52.123 END TEST event 00:06:52.123 ************************************ 00:06:52.123 00:06:52.123 real 1m29.487s 00:06:52.123 user 2m38.147s 00:06:52.123 sys 0m14.314s 00:06:52.123 13:04:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.123 13:04:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 13:04:43 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.123 13:04:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.123 13:04:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.123 13:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 ************************************ 00:06:52.123 START TEST thread 00:06:52.123 ************************************ 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.123 * Looking for test storage... 
00:06:52.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.123 13:04:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.123 13:04:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.123 13:04:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.123 13:04:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.123 13:04:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.123 13:04:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.123 13:04:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.123 13:04:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.123 13:04:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.123 13:04:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.123 13:04:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.123 13:04:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:52.123 13:04:43 thread -- scripts/common.sh@345 -- # : 1 00:06:52.123 13:04:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.123 13:04:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.123 13:04:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:52.123 13:04:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:52.123 13:04:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.123 13:04:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:52.123 13:04:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.123 13:04:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:52.123 13:04:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:52.123 13:04:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.123 13:04:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:52.123 13:04:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.123 13:04:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.123 13:04:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.123 13:04:43 thread -- scripts/common.sh@368 -- # return 0 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.123 --rc genhtml_branch_coverage=1 00:06:52.123 --rc genhtml_function_coverage=1 00:06:52.123 --rc genhtml_legend=1 00:06:52.123 --rc geninfo_all_blocks=1 00:06:52.123 --rc geninfo_unexecuted_blocks=1 00:06:52.123 00:06:52.123 ' 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.123 --rc genhtml_branch_coverage=1 00:06:52.123 --rc genhtml_function_coverage=1 00:06:52.123 --rc genhtml_legend=1 00:06:52.123 --rc geninfo_all_blocks=1 00:06:52.123 --rc geninfo_unexecuted_blocks=1 00:06:52.123 00:06:52.123 ' 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:52.123 --rc genhtml_branch_coverage=1 00:06:52.123 --rc genhtml_function_coverage=1 00:06:52.123 --rc genhtml_legend=1 00:06:52.123 --rc geninfo_all_blocks=1 00:06:52.123 --rc geninfo_unexecuted_blocks=1 00:06:52.123 00:06:52.123 ' 00:06:52.123 13:04:43 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.123 --rc genhtml_branch_coverage=1 00:06:52.123 --rc genhtml_function_coverage=1 00:06:52.123 --rc genhtml_legend=1 00:06:52.123 --rc geninfo_all_blocks=1 00:06:52.123 --rc geninfo_unexecuted_blocks=1 00:06:52.123 00:06:52.123 ' 00:06:52.123 13:04:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.124 13:04:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.124 13:04:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.124 13:04:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.124 ************************************ 00:06:52.124 START TEST thread_poller_perf 00:06:52.124 ************************************ 00:06:52.124 13:04:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.124 [2024-12-11 13:04:43.538012] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:52.124 [2024-12-11 13:04:43.538578] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61911 ] 00:06:52.382 [2024-12-11 13:04:43.728973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.382 [2024-12-11 13:04:43.884421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.382 Running 1000 pollers for 1 seconds with 1 microseconds period. 
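The banner maps directly onto the flags in the poller_perf command line logged above; reading the two runs side by side suggests the following flag meanings (an inference from the banners, not documented in this log):

    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000 -> register 1000 pollers
    #   -l 1    -> 1 microsecond poller period (the second run uses -l 0,
    #              i.e. the poller runs on every reactor iteration)
    #   -t 1    -> measure for 1 second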
00:06:53.757 [2024-12-11T13:04:45.325Z] ====================================== 00:06:53.757 [2024-12-11T13:04:45.325Z] busy:2501344610 (cyc) 00:06:53.757 [2024-12-11T13:04:45.325Z] total_run_count: 387000 00:06:53.757 [2024-12-11T13:04:45.325Z] tsc_hz: 2490000000 (cyc) 00:06:53.757 [2024-12-11T13:04:45.325Z] ====================================== 00:06:53.757 [2024-12-11T13:04:45.325Z] poller_cost: 6463 (cyc), 2595 (nsec) 00:06:53.757 00:06:53.757 real 0m1.670s 00:06:53.757 user 0m1.415s 00:06:53.757 sys 0m0.145s 00:06:53.757 ************************************ 00:06:53.757 END TEST thread_poller_perf 00:06:53.757 ************************************ 00:06:53.757 13:04:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.757 13:04:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.758 13:04:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.758 13:04:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.758 13:04:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.758 13:04:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.758 ************************************ 00:06:53.758 START TEST thread_poller_perf 00:06:53.758 ************************************ 00:06:53.758 13:04:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.758 [2024-12-11 13:04:45.287162] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:06:53.758 [2024-12-11 13:04:45.287295] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:06:54.015 [2024-12-11 13:04:45.473361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.273 Running 1000 pollers for 1 seconds with 0 microseconds period. 
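The summary fields of the first run appear to reduce to simple arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A worked check against the numbers reported above:

    busy=2501344610 runs=387000 tsc_hz=2490000000
    cyc=$(( busy / runs ))                       # 6463 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))        # ~2595 ns at 2.49 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"   # matches the report above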
00:06:54.273 [2024-12-11 13:04:45.632622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.651 [2024-12-11T13:04:47.219Z] ====================================== 00:06:55.651 [2024-12-11T13:04:47.219Z] busy:2494227818 (cyc) 00:06:55.651 [2024-12-11T13:04:47.219Z] total_run_count: 4713000 00:06:55.651 [2024-12-11T13:04:47.219Z] tsc_hz: 2490000000 (cyc) 00:06:55.651 [2024-12-11T13:04:47.219Z] ====================================== 00:06:55.651 [2024-12-11T13:04:47.219Z] poller_cost: 529 (cyc), 212 (nsec) 00:06:55.651 ************************************ 00:06:55.651 END TEST thread_poller_perf 00:06:55.651 ************************************ 00:06:55.651 00:06:55.651 real 0m1.662s 00:06:55.651 user 0m1.410s 00:06:55.651 sys 0m0.144s 00:06:55.651 13:04:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.651 13:04:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.651 13:04:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.651 ************************************ 00:06:55.651 END TEST thread 00:06:55.651 ************************************ 00:06:55.651 00:06:55.651 real 0m3.729s 00:06:55.651 user 0m2.991s 00:06:55.651 sys 0m0.522s 00:06:55.651 13:04:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.651 13:04:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.651 13:04:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.651 13:04:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.651 13:04:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.651 13:04:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.651 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:55.651 ************************************ 00:06:55.651 START TEST app_cmdline 00:06:55.651 ************************************ 00:06:55.651 13:04:47 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.651 * Looking for test storage... 
00:06:55.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.651 13:04:47 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.651 13:04:47 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.651 13:04:47 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.910 13:04:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.910 --rc genhtml_branch_coverage=1 00:06:55.910 --rc genhtml_function_coverage=1 00:06:55.910 --rc genhtml_legend=1 00:06:55.910 --rc geninfo_all_blocks=1 00:06:55.910 --rc geninfo_unexecuted_blocks=1 00:06:55.910 00:06:55.910 ' 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.910 --rc genhtml_branch_coverage=1 00:06:55.910 --rc genhtml_function_coverage=1 00:06:55.910 --rc genhtml_legend=1 00:06:55.910 --rc geninfo_all_blocks=1 00:06:55.910 --rc geninfo_unexecuted_blocks=1 00:06:55.910 
00:06:55.910 ' 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.910 --rc genhtml_branch_coverage=1 00:06:55.910 --rc genhtml_function_coverage=1 00:06:55.910 --rc genhtml_legend=1 00:06:55.910 --rc geninfo_all_blocks=1 00:06:55.910 --rc geninfo_unexecuted_blocks=1 00:06:55.910 00:06:55.910 ' 00:06:55.910 13:04:47 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.911 --rc genhtml_branch_coverage=1 00:06:55.911 --rc genhtml_function_coverage=1 00:06:55.911 --rc genhtml_legend=1 00:06:55.911 --rc geninfo_all_blocks=1 00:06:55.911 --rc geninfo_unexecuted_blocks=1 00:06:55.911 00:06:55.911 ' 00:06:55.911 13:04:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.911 13:04:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62037 00:06:55.911 13:04:47 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.911 13:04:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62037 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62037 ']' 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.911 13:04:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.911 [2024-12-11 13:04:47.390669] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:06:55.911 [2024-12-11 13:04:47.390987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62037 ] 00:06:56.170 [2024-12-11 13:04:47.572505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.170 [2024-12-11 13:04:47.723382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:57.548 { 00:06:57.548 "version": "SPDK v25.01-pre git sha1 bcaf208e3", 00:06:57.548 "fields": { 00:06:57.548 "major": 25, 00:06:57.548 "minor": 1, 00:06:57.548 "patch": 0, 00:06:57.548 "suffix": "-pre", 00:06:57.548 "commit": "bcaf208e3" 00:06:57.548 } 00:06:57.548 } 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.548 13:04:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.548 13:04:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.548 13:04:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.548 13:04:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.548 13:04:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.548 13:04:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.548 13:04:49 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.807 request: 00:06:57.807 { 00:06:57.807 "method": "env_dpdk_get_mem_stats", 00:06:57.807 "req_id": 1 00:06:57.807 } 00:06:57.807 Got JSON-RPC error response 00:06:57.807 response: 00:06:57.807 { 00:06:57.807 "code": -32601, 00:06:57.807 "message": "Method not found" 00:06:57.807 } 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.807 13:04:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62037 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62037 ']' 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62037 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62037 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.807 killing process with pid 62037 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62037' 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@973 -- # kill 62037 00:06:57.807 13:04:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 62037 00:07:01.095 00:07:01.095 real 0m4.923s 00:07:01.095 user 0m4.885s 00:07:01.095 sys 0m0.845s 00:07:01.095 13:04:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.095 13:04:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.095 ************************************ 00:07:01.095 END TEST app_cmdline 00:07:01.095 ************************************ 00:07:01.095 13:04:52 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:01.095 13:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.095 13:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.095 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:07:01.095 ************************************ 00:07:01.095 START TEST version 00:07:01.095 ************************************ 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:01.095 * Looking for test storage... 
00:07:01.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.095 13:04:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.095 13:04:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.095 13:04:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.095 13:04:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.095 13:04:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.095 13:04:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.095 13:04:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.095 13:04:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.095 13:04:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.095 13:04:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.095 13:04:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.095 13:04:52 version -- scripts/common.sh@344 -- # case "$op" in 00:07:01.095 13:04:52 version -- scripts/common.sh@345 -- # : 1 00:07:01.095 13:04:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.095 13:04:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.095 13:04:52 version -- scripts/common.sh@365 -- # decimal 1 00:07:01.095 13:04:52 version -- scripts/common.sh@353 -- # local d=1 00:07:01.095 13:04:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.095 13:04:52 version -- scripts/common.sh@355 -- # echo 1 00:07:01.095 13:04:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.095 13:04:52 version -- scripts/common.sh@366 -- # decimal 2 00:07:01.095 13:04:52 version -- scripts/common.sh@353 -- # local d=2 00:07:01.095 13:04:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.095 13:04:52 version -- scripts/common.sh@355 -- # echo 2 00:07:01.095 13:04:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.095 13:04:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.095 13:04:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.095 13:04:52 version -- scripts/common.sh@368 -- # return 0 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.095 --rc genhtml_branch_coverage=1 00:07:01.095 --rc genhtml_function_coverage=1 00:07:01.095 --rc genhtml_legend=1 00:07:01.095 --rc geninfo_all_blocks=1 00:07:01.095 --rc geninfo_unexecuted_blocks=1 00:07:01.095 00:07:01.095 ' 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.095 --rc genhtml_branch_coverage=1 00:07:01.095 --rc genhtml_function_coverage=1 00:07:01.095 --rc genhtml_legend=1 00:07:01.095 --rc geninfo_all_blocks=1 00:07:01.095 --rc geninfo_unexecuted_blocks=1 00:07:01.095 00:07:01.095 ' 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.095 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:01.095 --rc genhtml_branch_coverage=1 00:07:01.095 --rc genhtml_function_coverage=1 00:07:01.095 --rc genhtml_legend=1 00:07:01.095 --rc geninfo_all_blocks=1 00:07:01.095 --rc geninfo_unexecuted_blocks=1 00:07:01.095 00:07:01.095 ' 00:07:01.095 13:04:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.095 --rc genhtml_branch_coverage=1 00:07:01.095 --rc genhtml_function_coverage=1 00:07:01.095 --rc genhtml_legend=1 00:07:01.095 --rc geninfo_all_blocks=1 00:07:01.095 --rc geninfo_unexecuted_blocks=1 00:07:01.096 00:07:01.096 ' 00:07:01.096 13:04:52 version -- app/version.sh@17 -- # get_header_version major 00:07:01.096 13:04:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # cut -f2 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.096 13:04:52 version -- app/version.sh@17 -- # major=25 00:07:01.096 13:04:52 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.096 13:04:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # cut -f2 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.096 13:04:52 version -- app/version.sh@18 -- # minor=1 00:07:01.096 13:04:52 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.096 13:04:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # cut -f2 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.096 13:04:52 version -- app/version.sh@19 -- # patch=0 00:07:01.096 13:04:52 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.096 13:04:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # cut -f2 00:07:01.096 13:04:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.096 13:04:52 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.096 13:04:52 version -- app/version.sh@22 -- # version=25.1 00:07:01.096 13:04:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.096 13:04:52 version -- app/version.sh@28 -- # version=25.1rc0 00:07:01.096 13:04:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:01.096 13:04:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.096 13:04:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.096 13:04:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.096 00:07:01.096 real 0m0.329s 00:07:01.096 user 0m0.193s 00:07:01.096 sys 0m0.202s 00:07:01.096 13:04:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.096 13:04:52 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.096 ************************************ 00:07:01.096 END TEST version 00:07:01.096 ************************************ 00:07:01.096 13:04:52 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.096 13:04:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.096 13:04:52 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.096 13:04:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.096 13:04:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.096 13:04:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.096 13:04:52 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:01.096 13:04:52 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:01.096 13:04:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.096 13:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.096 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:07:01.096 ************************************ 00:07:01.096 START TEST blockdev_nvme 00:07:01.096 ************************************ 00:07:01.096 13:04:52 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:01.096 * Looking for test storage... 00:07:01.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:01.096 13:04:52 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.096 13:04:52 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.096 13:04:52 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.355 13:04:52 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.355 13:04:52 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:01.355 13:04:52 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.355 13:04:52 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.355 --rc genhtml_branch_coverage=1 00:07:01.355 --rc genhtml_function_coverage=1 00:07:01.355 --rc genhtml_legend=1 00:07:01.355 --rc geninfo_all_blocks=1 00:07:01.355 --rc geninfo_unexecuted_blocks=1 00:07:01.355 00:07:01.355 ' 00:07:01.355 13:04:52 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.355 --rc genhtml_branch_coverage=1 00:07:01.355 --rc genhtml_function_coverage=1 00:07:01.355 --rc genhtml_legend=1 00:07:01.355 --rc geninfo_all_blocks=1 00:07:01.355 --rc geninfo_unexecuted_blocks=1 00:07:01.355 00:07:01.355 ' 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:01.356 13:04:52 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62232 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:01.356 13:04:52 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 62232 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 62232 ']' 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.356 13:04:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.356 [2024-12-11 13:04:52.820033] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:07:01.356 [2024-12-11 13:04:52.820185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:07:01.615 [2024-12-11 13:04:53.003481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.616 [2024-12-11 13:04:53.145208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.994 13:04:54 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.994 13:04:54 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:02.994 13:04:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:02.994 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.994 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:03.254 13:04:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:03.254 13:04:54 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:03.255 13:04:54 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9eaca7c0-50e1-4726-a2a4-aa2ee5d58d34"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9eaca7c0-50e1-4726-a2a4-aa2ee5d58d34",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "54debc7d-5b0d-451b-8f46-662835452d48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "54debc7d-5b0d-451b-8f46-662835452d48",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a711c4ca-b27f-4e1b-b7c0-9e449058cda6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a711c4ca-b27f-4e1b-b7c0-9e449058cda6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e3851114-22e8-41c4-9cbe-94811c465dc0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3851114-22e8-41c4-9cbe-94811c465dc0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cd54a151-ae96-4027-86e4-6d791b9c807c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "cd54a151-ae96-4027-86e4-6d791b9c807c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8d821ab0-ce16-40e6-87cd-d61e940e9a17"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8d821ab0-ce16-40e6-87cd-d61e940e9a17",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:03.514 13:04:54 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:03.514 13:04:54 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:03.514 13:04:54 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:03.514 13:04:54 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 62232 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 62232 ']' 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 62232 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:03.514 13:04:54 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62232 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.514 killing process with pid 62232 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62232' 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 62232 00:07:03.514 13:04:54 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 62232 00:07:06.057 13:04:57 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:06.057 13:04:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:06.057 13:04:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:06.057 13:04:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.057 13:04:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.057 ************************************ 00:07:06.057 START TEST bdev_hello_world 00:07:06.057 ************************************ 00:07:06.057 13:04:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:06.317 [2024-12-11 13:04:57.670332] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:06.317 [2024-12-11 13:04:57.670456] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:07:06.317 [2024-12-11 13:04:57.850718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.576 [2024-12-11 13:04:57.997604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.513 [2024-12-11 13:04:58.738171] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:07.513 [2024-12-11 13:04:58.738259] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:07.513 [2024-12-11 13:04:58.738291] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:07.513 [2024-12-11 13:04:58.741633] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:07.513 [2024-12-11 13:04:58.742318] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:07.513 [2024-12-11 13:04:58.742357] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:07.513 [2024-12-11 13:04:58.742594] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
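(The hello_bdev example above opened Nvme0n1 from the generated JSON config, wrote a buffer, and read back "Hello World!". The same step can be reproduced outside the harness with the invocation this test used, run from the repo root with the paths of this run:

    # run the hello-world bdev example against the first NVMe namespace
    build/examples/hello_bdev \
        --json test/bdev/bdev.json \  # bdev config generated by gen_nvme.sh earlier
        -b Nvme0n1                    # bdev to open, write to, and read back
)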
00:07:07.513 00:07:07.513 [2024-12-11 13:04:58.742627] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:08.450 00:07:08.450 real 0m2.433s 00:07:08.450 user 0m1.988s 00:07:08.450 sys 0m0.336s 00:07:08.450 13:05:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.450 ************************************ 00:07:08.450 END TEST bdev_hello_world 00:07:08.450 ************************************ 00:07:08.450 13:05:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:08.709 13:05:00 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:08.709 13:05:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.709 13:05:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.709 13:05:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.709 ************************************ 00:07:08.709 START TEST bdev_bounds 00:07:08.709 ************************************ 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62379 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.709 Process bdevio pid: 62379 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62379' 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62379 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62379 ']' 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.709 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.710 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.710 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.710 13:05:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:08.710 [2024-12-11 13:05:00.182809] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:07:08.710 [2024-12-11 13:05:00.182936] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62379 ] 00:07:08.969 [2024-12-11 13:05:00.366410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.969 [2024-12-11 13:05:00.514432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.969 [2024-12-11 13:05:00.514616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.969 [2024-12-11 13:05:00.514661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.906 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.906 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:09.906 13:05:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:09.906 I/O targets: 00:07:09.906 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:09.906 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:09.906 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.906 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.906 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.906 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:09.906 00:07:09.906 00:07:09.906 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.906 http://cunit.sourceforge.net/ 00:07:09.906 00:07:09.906 00:07:09.906 Suite: bdevio tests on: Nvme3n1 00:07:09.906 Test: blockdev write read block ...passed 00:07:09.906 Test: blockdev write zeroes read block ...passed 00:07:09.906 Test: blockdev write zeroes read no split ...passed 00:07:09.906 Test: blockdev write zeroes read split ...passed 00:07:09.906 Test: blockdev write zeroes read split partial ...passed 00:07:09.906 Test: blockdev reset ...[2024-12-11 13:05:01.446074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:09.906 [2024-12-11 13:05:01.450403] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
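(Each per-namespace suite that follows starts the same way as the Nvme3n1 suite above: the controller is reset first (the NOTICE pair just logged), then the write/read, writev/readv, compare, and passthru cases run. bdevio was started with -w so it waits for RPC before running, with -s 0 passing the PRE_RESERVED_MEM value set earlier; the suites are then kicked off externally, as in this run:

    # start bdevio in wait mode, then drive all suites over RPC
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests
)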
00:07:09.906 passed 00:07:09.906 Test: blockdev write read 8 blocks ...passed 00:07:09.906 Test: blockdev write read size > 128k ...passed 00:07:09.906 Test: blockdev write read invalid size ...passed 00:07:09.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.906 Test: blockdev write read max offset ...passed 00:07:09.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.906 Test: blockdev writev readv 8 blocks ...passed 00:07:09.906 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.906 Test: blockdev writev readv block ...passed 00:07:09.906 Test: blockdev writev readv size > 128k ...passed 00:07:09.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.906 Test: blockdev comparev and writev ...[2024-12-11 13:05:01.459516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9c0a000 len:0x1000 00:07:09.906 [2024-12-11 13:05:01.459594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.906 passed 00:07:09.906 Test: blockdev nvme passthru rw ...passed 00:07:09.906 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.460511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.906 [2024-12-11 13:05:01.460553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.906 passed 00:07:09.906 Test: blockdev nvme admin passthru ...passed 00:07:09.906 Test: blockdev copy ...passed 00:07:09.906 Suite: bdevio tests on: Nvme2n3 00:07:09.906 Test: blockdev write read block ...passed 00:07:09.906 Test: blockdev write zeroes read block ...passed 00:07:10.165 Test: blockdev write zeroes read no split ...passed 00:07:10.165 Test: blockdev write zeroes read split ...passed 00:07:10.165 Test: blockdev write zeroes read split partial ...passed 00:07:10.165 Test: blockdev reset ...[2024-12-11 13:05:01.538783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.165 [2024-12-11 13:05:01.543543] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.165 passed 00:07:10.165 Test: blockdev write read 8 blocks ...passed 00:07:10.165 Test: blockdev write read size > 128k ...passed 00:07:10.165 Test: blockdev write read invalid size ...passed 00:07:10.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.165 Test: blockdev write read max offset ...passed 00:07:10.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.165 Test: blockdev writev readv 8 blocks ...passed 00:07:10.165 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.165 Test: blockdev writev readv block ...passed 00:07:10.165 Test: blockdev writev readv size > 128k ...passed 00:07:10.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.165 Test: blockdev comparev and writev ...[2024-12-11 13:05:01.552597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ce06000 len:0x1000 00:07:10.165 [2024-12-11 13:05:01.552667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.165 passed 00:07:10.165 Test: blockdev nvme passthru rw ...passed 00:07:10.165 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.553647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.165 [2024-12-11 13:05:01.553684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.165 passed 00:07:10.165 Test: blockdev nvme admin passthru ...passed 00:07:10.165 Test: blockdev copy ...passed 00:07:10.165 Suite: bdevio tests on: Nvme2n2 00:07:10.165 Test: blockdev write read block ...passed 00:07:10.165 Test: blockdev write zeroes read block ...passed 00:07:10.165 Test: blockdev write zeroes read no split ...passed 00:07:10.165 Test: blockdev write zeroes read split ...passed 00:07:10.165 Test: blockdev write zeroes read split partial ...passed 00:07:10.165 Test: blockdev reset ...[2024-12-11 13:05:01.632474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.165 [2024-12-11 13:05:01.637200] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
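(The three Nvme2n* suites here target namespaces 1-3 of the single controller at 0000:00:12.0 (serial 12342), matching the bdev dump earlier in this log. That list was collected with the jq filters from blockdev.sh@785-786, which can be combined into one query by hand:

    # list unclaimed bdevs by name, as blockdev.sh does in two steps
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
)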
00:07:10.165 passed 00:07:10.165 Test: blockdev write read 8 blocks ...passed 00:07:10.165 Test: blockdev write read size > 128k ...passed 00:07:10.165 Test: blockdev write read invalid size ...passed 00:07:10.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.165 Test: blockdev write read max offset ...passed 00:07:10.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.165 Test: blockdev writev readv 8 blocks ...passed 00:07:10.165 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.165 Test: blockdev writev readv block ...passed 00:07:10.165 Test: blockdev writev readv size > 128k ...passed 00:07:10.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.166 Test: blockdev comparev and writev ...[2024-12-11 13:05:01.646381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c3c000 len:0x1000 00:07:10.166 [2024-12-11 13:05:01.646460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.166 passed 00:07:10.166 Test: blockdev nvme passthru rw ...passed 00:07:10.166 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.647389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.166 [2024-12-11 13:05:01.647424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.166 passed 00:07:10.166 Test: blockdev nvme admin passthru ...passed 00:07:10.166 Test: blockdev copy ...passed 00:07:10.166 Suite: bdevio tests on: Nvme2n1 00:07:10.166 Test: blockdev write read block ...passed 00:07:10.166 Test: blockdev write zeroes read block ...passed 00:07:10.166 Test: blockdev write zeroes read no split ...passed 00:07:10.166 Test: blockdev write zeroes read split ...passed 00:07:10.166 Test: blockdev write zeroes read split partial ...passed 00:07:10.166 Test: blockdev reset ...[2024-12-11 13:05:01.729280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.443 [2024-12-11 13:05:01.734011] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.443 passed 00:07:10.443 Test: blockdev write read 8 blocks ...passed 00:07:10.443 Test: blockdev write read size > 128k ...passed 00:07:10.443 Test: blockdev write read invalid size ...passed 00:07:10.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.443 Test: blockdev write read max offset ...passed 00:07:10.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.443 Test: blockdev writev readv 8 blocks ...passed 00:07:10.443 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.443 Test: blockdev writev readv block ...passed 00:07:10.443 Test: blockdev writev readv size > 128k ...passed 00:07:10.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.443 Test: blockdev comparev and writev ...[2024-12-11 13:05:01.742917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c38000 len:0x1000 00:07:10.443 [2024-12-11 13:05:01.742991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.443 passed 00:07:10.443 Test: blockdev nvme passthru rw ...passed 00:07:10.443 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.743928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.443 [2024-12-11 13:05:01.743962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.443 passed 00:07:10.443 Test: blockdev nvme admin passthru ...passed 00:07:10.443 Test: blockdev copy ...passed 00:07:10.443 Suite: bdevio tests on: Nvme1n1 00:07:10.443 Test: blockdev write read block ...passed 00:07:10.443 Test: blockdev write zeroes read block ...passed 00:07:10.443 Test: blockdev write zeroes read no split ...passed 00:07:10.443 Test: blockdev write zeroes read split ...passed 00:07:10.443 Test: blockdev write zeroes read split partial ...passed 00:07:10.443 Test: blockdev reset ...[2024-12-11 13:05:01.823966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:10.443 [2024-12-11 13:05:01.828311] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
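(The COMPARE FAILURE (02/85) completions printed by the comparev cases above are NOTICE-level and expected; each suite still reports passed. The pair is the raw NVMe completion status, decoded as follows:

    # NVMe completion status as logged: (SCT/SC)
    #   SCT 0x2  = Media and Data Integrity Errors
    #   SC  0x85 = Compare Failure
    # so "(02/85) ... dnr:1" above is a deliberate miscompare, with do-not-retry set
)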
00:07:10.443 passed 00:07:10.443 Test: blockdev write read 8 blocks ...passed 00:07:10.443 Test: blockdev write read size > 128k ...passed 00:07:10.443 Test: blockdev write read invalid size ...passed 00:07:10.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.443 Test: blockdev write read max offset ...passed 00:07:10.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.443 Test: blockdev writev readv 8 blocks ...passed 00:07:10.443 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.443 Test: blockdev writev readv block ...passed 00:07:10.443 Test: blockdev writev readv size > 128k ...passed 00:07:10.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.443 Test: blockdev comparev and writev ...[2024-12-11 13:05:01.837644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c34000 len:0x1000 00:07:10.443 [2024-12-11 13:05:01.837730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.443 passed 00:07:10.443 Test: blockdev nvme passthru rw ...passed 00:07:10.443 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.838725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.443 [2024-12-11 13:05:01.838768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.443 passed 00:07:10.443 Test: blockdev nvme admin passthru ...passed 00:07:10.443 Test: blockdev copy ...passed 00:07:10.443 Suite: bdevio tests on: Nvme0n1 00:07:10.443 Test: blockdev write read block ...passed 00:07:10.443 Test: blockdev write zeroes read block ...passed 00:07:10.443 Test: blockdev write zeroes read no split ...passed 00:07:10.443 Test: blockdev write zeroes read split ...passed 00:07:10.443 Test: blockdev write zeroes read split partial ...passed 00:07:10.443 Test: blockdev reset ...[2024-12-11 13:05:01.924964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:10.443 [2024-12-11 13:05:01.929310] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:10.443 passed 00:07:10.443 Test: blockdev write read 8 blocks ...passed 00:07:10.443 Test: blockdev write read size > 128k ...passed 00:07:10.443 Test: blockdev write read invalid size ...passed 00:07:10.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.443 Test: blockdev write read max offset ...passed 00:07:10.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.443 Test: blockdev writev readv 8 blocks ...passed 00:07:10.443 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.443 Test: blockdev writev readv block ...passed 00:07:10.443 Test: blockdev writev readv size > 128k ...passed 00:07:10.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.444 Test: blockdev comparev and writev ...passed 00:07:10.444 Test: blockdev nvme passthru rw ...[2024-12-11 13:05:01.937296] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:10.444 separate metadata which is not supported yet. 00:07:10.444 passed 00:07:10.444 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:01.938016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:10.444 passed 00:07:10.444 Test: blockdev nvme admin passthru ...[2024-12-11 13:05:01.938094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:10.444 passed 00:07:10.444 Test: blockdev copy ...passed 00:07:10.444 00:07:10.444 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.444 suites 6 6 n/a 0 0 00:07:10.444 tests 138 138 138 0 0 00:07:10.444 asserts 893 893 893 0 n/a 00:07:10.444 00:07:10.444 Elapsed time = 1.542 seconds 00:07:10.444 0 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62379 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62379 ']' 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62379 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.444 13:05:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62379 00:07:10.715 13:05:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.715 13:05:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.715 killing process with pid 62379 00:07:10.715 13:05:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62379' 00:07:10.715 13:05:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62379 00:07:10.715 13:05:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62379 00:07:11.653 13:05:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:11.653 00:07:11.653 real 0m3.110s 00:07:11.653 user 0m7.814s 00:07:11.653 sys 0m0.502s 00:07:11.653 13:05:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.653 ************************************ 00:07:11.653 END TEST bdev_bounds 00:07:11.653 ************************************ 00:07:11.653 13:05:03 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.912 13:05:03 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.912 13:05:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.912 13:05:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.912 13:05:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.912 ************************************ 00:07:11.912 START TEST bdev_nbd 00:07:11.912 ************************************ 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62444 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62444 /var/tmp/spdk-nbd.sock 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62444 ']' 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 
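(The bdev_nbd test starting here exports each bdev as a kernel /dev/nbd* device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, sanity-checks each device with a 4 KiB direct-I/O dd, then tears the exports down. A condensed sketch of the cycle this test performs per bdev, using the same RPC verbs seen below:

    # export a bdev through the NBD kernel driver and sanity-check it
    NBD=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
    dd if="$NBD" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # read one 4 KiB block
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$NBD"
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks      # should now print []
)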
00:07:11.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.912 13:05:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:11.912 [2024-12-11 13:05:03.382600] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:11.912 [2024-12-11 13:05:03.382734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.172 [2024-12-11 13:05:03.566917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.172 [2024-12-11 13:05:03.721869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.109 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.369 13:05:04 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.369 1+0 records in 00:07:13.369 1+0 records out 00:07:13.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556744 s, 7.4 MB/s 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.369 13:05:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.633 1+0 records in 00:07:13.633 1+0 records out 00:07:13.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555407 s, 7.4 MB/s 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.633 13:05:05 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.633 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.895 1+0 records in 00:07:13.895 1+0 records out 00:07:13.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706484 s, 5.8 MB/s 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.895 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.155 1+0 records in 00:07:14.155 1+0 records out 00:07:14.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000764661 s, 5.4 MB/s 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.155 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.414 1+0 records in 00:07:14.414 1+0 records out 00:07:14.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606042 s, 6.8 MB/s 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.414 13:05:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:14.673 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.674 1+0 records in 00:07:14.674 1+0 records out 00:07:14.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887569 s, 4.6 MB/s 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.674 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd0", 00:07:14.933 "bdev_name": "Nvme0n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd1", 00:07:14.933 "bdev_name": "Nvme1n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd2", 00:07:14.933 "bdev_name": "Nvme2n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd3", 00:07:14.933 "bdev_name": "Nvme2n2" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd4", 00:07:14.933 "bdev_name": "Nvme2n3" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd5", 00:07:14.933 "bdev_name": "Nvme3n1" 00:07:14.933 } 00:07:14.933 ]' 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd0", 00:07:14.933 "bdev_name": "Nvme0n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": 
"/dev/nbd1", 00:07:14.933 "bdev_name": "Nvme1n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd2", 00:07:14.933 "bdev_name": "Nvme2n1" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd3", 00:07:14.933 "bdev_name": "Nvme2n2" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd4", 00:07:14.933 "bdev_name": "Nvme2n3" 00:07:14.933 }, 00:07:14.933 { 00:07:14.933 "nbd_device": "/dev/nbd5", 00:07:14.933 "bdev_name": "Nvme3n1" 00:07:14.933 } 00:07:14.933 ]' 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.933 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.192 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.451 13:05:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.711 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:15.970 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.228 13:05:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.228 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.487 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.487 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.487 13:05:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.487 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:16.746 /dev/nbd0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.746 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.005 1+0 records in 00:07:17.005 1+0 records out 00:07:17.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728163 s, 5.6 MB/s 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:17.005 /dev/nbd1 00:07:17.005 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.264 1+0 records in 00:07:17.264 1+0 records out 00:07:17.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000797756 s, 5.1 MB/s 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.264 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.265 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.265 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.265 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:17.523 /dev/nbd10 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.523 1+0 records in 00:07:17.523 1+0 records out 00:07:17.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832847 s, 4.9 MB/s 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.523 13:05:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:17.782 /dev/nbd11 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 
-- # local i 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.782 1+0 records in 00:07:17.782 1+0 records out 00:07:17.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736554 s, 5.6 MB/s 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.782 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:18.043 /dev/nbd12 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.043 1+0 records in 00:07:18.043 1+0 records out 00:07:18.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867198 s, 4.7 MB/s 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.043 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:18.323 /dev/nbd13 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.323 1+0 records in 00:07:18.323 1+0 records out 00:07:18.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091564 s, 4.5 MB/s 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.323 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.582 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd0", 00:07:18.582 "bdev_name": "Nvme0n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd1", 00:07:18.582 "bdev_name": "Nvme1n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd10", 00:07:18.582 "bdev_name": "Nvme2n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd11", 00:07:18.582 "bdev_name": "Nvme2n2" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd12", 00:07:18.582 "bdev_name": "Nvme2n3" 00:07:18.582 
}, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd13", 00:07:18.582 "bdev_name": "Nvme3n1" 00:07:18.582 } 00:07:18.582 ]' 00:07:18.582 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd0", 00:07:18.582 "bdev_name": "Nvme0n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd1", 00:07:18.582 "bdev_name": "Nvme1n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd10", 00:07:18.582 "bdev_name": "Nvme2n1" 00:07:18.582 }, 00:07:18.582 { 00:07:18.582 "nbd_device": "/dev/nbd11", 00:07:18.582 "bdev_name": "Nvme2n2" 00:07:18.582 }, 00:07:18.583 { 00:07:18.583 "nbd_device": "/dev/nbd12", 00:07:18.583 "bdev_name": "Nvme2n3" 00:07:18.583 }, 00:07:18.583 { 00:07:18.583 "nbd_device": "/dev/nbd13", 00:07:18.583 "bdev_name": "Nvme3n1" 00:07:18.583 } 00:07:18.583 ]' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.583 /dev/nbd1 00:07:18.583 /dev/nbd10 00:07:18.583 /dev/nbd11 00:07:18.583 /dev/nbd12 00:07:18.583 /dev/nbd13' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.583 /dev/nbd1 00:07:18.583 /dev/nbd10 00:07:18.583 /dev/nbd11 00:07:18.583 /dev/nbd12 00:07:18.583 /dev/nbd13' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:18.583 256+0 records in 00:07:18.583 256+0 records out 00:07:18.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112137 s, 93.5 MB/s 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.583 13:05:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.583 256+0 records in 00:07:18.583 256+0 records out 00:07:18.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130715 s, 8.0 MB/s 00:07:18.583 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.583 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:07:18.842 256+0 records in 00:07:18.842 256+0 records out 00:07:18.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140567 s, 7.5 MB/s 00:07:18.842 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.842 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:19.101 256+0 records in 00:07:19.101 256+0 records out 00:07:19.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133264 s, 7.9 MB/s 00:07:19.101 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.101 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:19.101 256+0 records in 00:07:19.101 256+0 records out 00:07:19.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12801 s, 8.2 MB/s 00:07:19.101 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.101 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:19.360 256+0 records in 00:07:19.360 256+0 records out 00:07:19.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128131 s, 8.2 MB/s 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:19.360 256+0 records in 00:07:19.360 256+0 records out 00:07:19.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13401 s, 7.8 MB/s 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.360 13:05:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.619 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.877 13:05:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.135 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.394 13:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.652 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.911 13:05:12 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.911 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.169 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.169 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:21.170 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:21.428 malloc_lvol_verify 00:07:21.428 13:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:21.687 fcc7ecbd-fa25-4d3a-b268-c0158e19ad62 00:07:21.687 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:21.687 5c4a38ea-1183-471e-bea2-06ec65bb8413 00:07:21.687 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:21.945 /dev/nbd0 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:21.945 mke2fs 1.47.0 
(5-Feb-2023) 00:07:21.945 Discarding device blocks: 0/4096 done 00:07:21.945 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:21.945 00:07:21.945 Allocating group tables: 0/1 done 00:07:21.945 Writing inode tables: 0/1 done 00:07:21.945 Creating journal (1024 blocks): done 00:07:21.945 Writing superblocks and filesystem accounting information: 0/1 done 00:07:21.945 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.945 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62444 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62444 ']' 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62444 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62444 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.204 killing process with pid 62444 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62444' 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62444 00:07:22.204 13:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62444 00:07:23.580 13:05:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:23.580 00:07:23.580 real 0m11.805s 00:07:23.580 user 0m15.081s 00:07:23.580 sys 0m4.885s 00:07:23.580 13:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.580 13:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:23.580 ************************************ 00:07:23.580 END TEST bdev_nbd 00:07:23.580 
************************************ 00:07:23.839 13:05:15 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:23.839 13:05:15 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:23.839 skipping fio tests on NVMe due to multi-ns failures. 00:07:23.839 13:05:15 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:23.839 13:05:15 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:23.839 13:05:15 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.839 13:05:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:23.839 13:05:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.839 13:05:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.839 ************************************ 00:07:23.839 START TEST bdev_verify 00:07:23.839 ************************************ 00:07:23.839 13:05:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.839 [2024-12-11 13:05:15.255722] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:23.839 [2024-12-11 13:05:15.255842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62842 ] 00:07:24.098 [2024-12-11 13:05:15.440254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.098 [2024-12-11 13:05:15.584650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.098 [2024-12-11 13:05:15.584705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.034 Running I/O for 5 seconds... 
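A note for readers following the trace: nearly every NBD step above is the same polling idiom, the waitfornbd / waitfornbd_exit helpers, which scan /proc/partitions up to 20 times and then probe the device with a single-block O_DIRECT dd whose output size is checked via stat. The sketch below is a condensed reconstruction pieced together from the trace; the sleep between retries is an assumption (the real implementation lives in autotest_common.sh).

# Condensed reconstruction of the waitfornbd helper traced repeatedly above.
nbdtest=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch file used by the trace
waitfornbd() {
    local nbd_name=$1 i size
    # Up to 20 passes over /proc/partitions until the kernel publishes the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # pacing assumed; a sleep is not visible in the trace
    done
    # Then retry a single 4 KiB O_DIRECT read and confirm data actually arrived.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$nbdtest" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$nbdtest")
        rm -f "$nbdtest"
        [ "$size" != 0 ] && return 0
    done
    return 1
}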
00:07:26.939 18176.00 IOPS, 71.00 MiB/s [2024-12-11T13:05:19.885Z] 17664.00 IOPS, 69.00 MiB/s [2024-12-11T13:05:20.820Z] 17642.67 IOPS, 68.92 MiB/s [2024-12-11T13:05:21.756Z] 17728.00 IOPS, 69.25 MiB/s [2024-12-11T13:05:21.756Z] 17574.40 IOPS, 68.65 MiB/s 00:07:30.188 Latency(us) 00:07:30.188 [2024-12-11T13:05:21.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.188 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0xbd0bd 00:07:30.188 Nvme0n1 : 5.05 1648.00 6.44 0.00 0.00 77514.88 17792.10 68641.72 00:07:30.188 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:30.188 Nvme0n1 : 5.08 1260.21 4.92 0.00 0.00 100628.50 11738.58 74116.22 00:07:30.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0xa0000 00:07:30.188 Nvme1n1 : 5.05 1647.48 6.44 0.00 0.00 77435.41 17265.71 61903.88 00:07:30.188 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0xa0000 length 0xa0000 00:07:30.188 Nvme1n1 : 5.08 1259.93 4.92 0.00 0.00 100561.68 10317.31 72852.87 00:07:30.188 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0x80000 00:07:30.188 Nvme2n1 : 5.05 1646.98 6.43 0.00 0.00 77341.10 16949.87 64009.46 00:07:30.188 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x80000 length 0x80000 00:07:30.188 Nvme2n1 : 5.08 1259.65 4.92 0.00 0.00 100486.52 10475.23 74958.44 00:07:30.188 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0x80000 00:07:30.188 Nvme2n2 : 5.05 1646.44 6.43 0.00 0.00 77253.90 16002.36 65693.92 00:07:30.188 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x80000 length 0x80000 00:07:30.188 Nvme2n2 : 5.07 1261.09 4.93 0.00 0.00 101274.20 11843.86 77064.02 00:07:30.188 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0x80000 00:07:30.188 Nvme2n3 : 5.05 1645.91 6.43 0.00 0.00 77173.99 15897.09 66957.26 00:07:30.188 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x80000 length 0x80000 00:07:30.188 Nvme2n3 : 5.08 1260.79 4.92 0.00 0.00 101023.46 11791.22 72431.76 00:07:30.188 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x0 length 0x20000 00:07:30.188 Nvme3n1 : 5.06 1645.49 6.43 0.00 0.00 77101.10 15265.41 68220.61 00:07:30.188 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.188 Verification LBA range: start 0x20000 length 0x20000 00:07:30.188 Nvme3n1 : 5.08 1260.50 4.92 0.00 0.00 100828.41 11896.49 75800.67 00:07:30.188 [2024-12-11T13:05:21.756Z] =================================================================================================================== 00:07:30.188 [2024-12-11T13:05:21.756Z] Total : 17442.46 68.13 0.00 0.00 87519.51 10317.31 77064.02 00:07:31.566 00:07:31.566 real 0m7.840s 00:07:31.566 user 0m14.359s 00:07:31.566 sys 0m0.392s 00:07:31.566 13:05:23 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.566 13:05:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:31.566 ************************************ 00:07:31.566 END TEST bdev_verify 00:07:31.566 ************************************ 00:07:31.566 13:05:23 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:31.566 13:05:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:31.566 13:05:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.566 13:05:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.566 ************************************ 00:07:31.566 START TEST bdev_verify_big_io 00:07:31.566 ************************************ 00:07:31.566 13:05:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:31.825 [2024-12-11 13:05:23.183183] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:31.825 [2024-12-11 13:05:23.183341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62945 ] 00:07:31.825 [2024-12-11 13:05:23.370307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.083 [2024-12-11 13:05:23.527111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.083 [2024-12-11 13:05:23.527182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.035 Running I/O for 5 seconds... 
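The big-I/O pass is the same bdevperf invocation as bdev_verify but with 64 KiB I/Os, and the MiB/s column is derived arithmetic: MiB/s = IOPS x io_size / 2^20. Below is a sketch of the equivalent command, reconstructed from the run_test line above; flag comments reflect common bdevperf usage, and -C is simply carried over from the harness.

# Equivalent of the bdev_verify_big_io invocation traced above.
# Cross-check on the first progress sample below:
#   1275 IOPS x 65536 B / 2^20 = 79.69 MiB/s
SPDK=/home/vagrant/spdk_repo/spdk
args=(
  --json "$SPDK/test/bdev/bdev.json"   # bdev config generated earlier in this job
  -q 128                               # queue depth
  -o 65536                             # 64 KiB per I/O (the plain verify run used 4096)
  -w verify                            # write, read back, and compare
  -t 5                                 # run for 5 seconds
  -C                                   # carried over verbatim from the harness
  -m 0x3                               # core mask: the two reactors started above
)
"$SPDK/build/examples/bdevperf" "${args[@]}"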
00:07:36.502 1275.00 IOPS, 79.69 MiB/s [2024-12-11T13:05:30.606Z] 1878.00 IOPS, 117.38 MiB/s [2024-12-11T13:05:30.606Z] 2913.00 IOPS, 182.06 MiB/s [2024-12-11T13:05:30.606Z] 2810.00 IOPS, 175.62 MiB/s 00:07:39.038 Latency(us) 00:07:39.038 [2024-12-11T13:05:30.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0xbd0b 00:07:39.038 Nvme0n1 : 5.63 132.22 8.26 0.00 0.00 934207.80 25688.01 1131956.74 00:07:39.038 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:39.038 Nvme0n1 : 5.44 188.11 11.76 0.00 0.00 663000.98 22108.53 758006.75 00:07:39.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0xa000 00:07:39.038 Nvme1n1 : 5.70 139.58 8.72 0.00 0.00 875967.51 40427.03 1152170.26 00:07:39.038 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0xa000 length 0xa000 00:07:39.038 Nvme1n1 : 5.45 188.01 11.75 0.00 0.00 646346.54 60640.54 633356.75 00:07:39.038 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0x8000 00:07:39.038 Nvme2n1 : 5.70 139.38 8.71 0.00 0.00 854908.11 50744.34 1172383.77 00:07:39.038 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x8000 length 0x8000 00:07:39.038 Nvme2n1 : 5.58 194.96 12.19 0.00 0.00 613176.81 25266.89 643463.51 00:07:39.038 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0x8000 00:07:39.038 Nvme2n2 : 5.69 143.00 8.94 0.00 0.00 811747.32 50112.67 875918.91 00:07:39.038 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x8000 length 0x8000 00:07:39.038 Nvme2n2 : 5.64 201.40 12.59 0.00 0.00 582246.16 20950.46 660308.10 00:07:39.038 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0x8000 00:07:39.038 Nvme2n3 : 5.70 142.81 8.93 0.00 0.00 794401.43 11738.58 1246499.98 00:07:39.038 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x8000 length 0x8000 00:07:39.038 Nvme2n3 : 5.67 203.21 12.70 0.00 0.00 561445.45 32004.73 697366.21 00:07:39.038 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x0 length 0x2000 00:07:39.038 Nvme3n1 : 5.73 165.36 10.33 0.00 0.00 671802.09 4579.62 889394.58 00:07:39.038 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.038 Verification LBA range: start 0x2000 length 0x2000 00:07:39.038 Nvme3n1 : 5.69 221.70 13.86 0.00 0.00 506930.76 2618.81 700735.13 00:07:39.038 [2024-12-11T13:05:30.606Z] =================================================================================================================== 00:07:39.038 [2024-12-11T13:05:30.606Z] Total : 2059.75 128.73 0.00 0.00 687403.01 2618.81 1246499.98 00:07:40.947 00:07:40.947 real 0m9.068s 00:07:40.947 user 0m16.720s 00:07:40.947 sys 0m0.492s 00:07:40.947 13:05:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:40.947 13:05:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:40.947 ************************************ 00:07:40.947 END TEST bdev_verify_big_io 00:07:40.947 ************************************ 00:07:40.947 13:05:32 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.947 13:05:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:40.947 13:05:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.947 13:05:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.947 ************************************ 00:07:40.947 START TEST bdev_write_zeroes 00:07:40.947 ************************************ 00:07:40.947 13:05:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.947 [2024-12-11 13:05:32.323736] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:40.947 [2024-12-11 13:05:32.323871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63060 ] 00:07:40.947 [2024-12-11 13:05:32.504522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.215 [2024-12-11 13:05:32.627632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.782 Running I/O for 1 seconds... 
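Every sub-test in this log runs under the same run_test wrapper, which prints the asterisk banners, times the command (the real/user/sys lines), and propagates its exit status. The following is a simplified model inferred from that observable output, not the actual SPDK implementation in autotest_common.sh.

# Simplified model of the run_test wrapper whose banners appear throughout this log.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # produces the real/user/sys lines seen after each test
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}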
00:07:43.156 77504.00 IOPS, 302.75 MiB/s 00:07:43.156 Latency(us) 00:07:43.156 [2024-12-11T13:05:34.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.156 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme0n1 : 1.02 12906.34 50.42 0.00 0.00 9895.49 8369.66 25056.33 00:07:43.156 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme1n1 : 1.02 12894.22 50.37 0.00 0.00 9894.29 8685.49 25898.56 00:07:43.156 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme2n1 : 1.02 12905.78 50.41 0.00 0.00 9846.21 6527.28 23582.43 00:07:43.156 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme2n2 : 1.02 12878.92 50.31 0.00 0.00 9856.09 8317.02 24214.10 00:07:43.156 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme2n3 : 1.02 12867.34 50.26 0.00 0.00 9840.91 8264.38 23582.43 00:07:43.156 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.156 Nvme3n1 : 1.02 12792.79 49.97 0.00 0.00 9879.34 7211.59 29688.60 00:07:43.156 [2024-12-11T13:05:34.724Z] =================================================================================================================== 00:07:43.156 [2024-12-11T13:05:34.724Z] Total : 77245.38 301.74 0.00 0.00 9868.70 6527.28 29688.60 00:07:44.101 00:07:44.101 real 0m3.305s 00:07:44.101 user 0m2.917s 00:07:44.101 sys 0m0.270s 00:07:44.101 13:05:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.101 ************************************ 00:07:44.101 END TEST bdev_write_zeroes 00:07:44.101 ************************************ 00:07:44.101 13:05:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:44.101 13:05:35 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.101 13:05:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.101 13:05:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.101 13:05:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.101 ************************************ 00:07:44.101 START TEST bdev_json_nonenclosed 00:07:44.101 ************************************ 00:07:44.101 13:05:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.360 [2024-12-11 13:05:35.697327] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:07:44.360 [2024-12-11 13:05:35.697448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63113 ] 00:07:44.360 [2024-12-11 13:05:35.877304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.620 [2024-12-11 13:05:35.997031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.620 [2024-12-11 13:05:35.997158] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:44.620 [2024-12-11 13:05:35.997183] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.620 [2024-12-11 13:05:35.997196] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.879 00:07:44.879 real 0m0.659s 00:07:44.879 user 0m0.404s 00:07:44.879 sys 0m0.150s 00:07:44.879 13:05:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.879 ************************************ 00:07:44.879 END TEST bdev_json_nonenclosed 00:07:44.879 ************************************ 00:07:44.879 13:05:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:44.879 13:05:36 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.879 13:05:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.879 13:05:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.879 13:05:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.879 ************************************ 00:07:44.879 START TEST bdev_json_nonarray 00:07:44.879 ************************************ 00:07:44.879 13:05:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.139 [2024-12-11 13:05:36.474863] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:45.139 [2024-12-11 13:05:36.475142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63144 ] 00:07:45.139 [2024-12-11 13:05:36.668978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.399 [2024-12-11 13:05:36.781857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.399 [2024-12-11 13:05:36.781960] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
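Both JSON negative tests exercise the loader check that just fired: the file handed to --json must be a single object whose "subsystems" key is an array. For contrast, a minimal well-formed config is sketched below; bdev_malloc_create is used purely as an illustration and plays no part in this run:

    cat > /tmp/enclosed.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 2048, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF

Per the errors printed above, nonenclosed.json omits the enclosing braces and nonarray.json supplies a non-array "subsystems", so json_config_prepare_ctx rejects each file before any bdev I/O can start, which is exactly the failure path these tests want.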
00:07:45.399 [2024-12-11 13:05:36.781984] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.399 [2024-12-11 13:05:36.781996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.658 00:07:45.658 real 0m0.711s 00:07:45.658 user 0m0.427s 00:07:45.658 sys 0m0.179s 00:07:45.658 13:05:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.658 13:05:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 ************************************ 00:07:45.658 END TEST bdev_json_nonarray 00:07:45.658 ************************************ 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:45.658 13:05:37 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:45.658 00:07:45.658 real 0m44.667s 00:07:45.658 user 1m4.771s 00:07:45.658 sys 0m8.609s 00:07:45.658 13:05:37 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.658 13:05:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 ************************************ 00:07:45.658 END TEST blockdev_nvme 00:07:45.658 ************************************ 00:07:45.658 13:05:37 -- spdk/autotest.sh@209 -- # uname -s 00:07:45.658 13:05:37 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:45.658 13:05:37 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.658 13:05:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.658 13:05:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.658 13:05:37 -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 ************************************ 00:07:45.658 START TEST blockdev_nvme_gpt 00:07:45.658 ************************************ 00:07:45.658 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.918 * Looking for test storage... 
00:07:45.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.918 13:05:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.918 --rc genhtml_branch_coverage=1 00:07:45.918 --rc genhtml_function_coverage=1 00:07:45.918 --rc genhtml_legend=1 00:07:45.918 --rc geninfo_all_blocks=1 00:07:45.918 --rc geninfo_unexecuted_blocks=1 00:07:45.918 00:07:45.918 ' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.918 --rc 
genhtml_branch_coverage=1 00:07:45.918 --rc genhtml_function_coverage=1 00:07:45.918 --rc genhtml_legend=1 00:07:45.918 --rc geninfo_all_blocks=1 00:07:45.918 --rc geninfo_unexecuted_blocks=1 00:07:45.918 00:07:45.918 ' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.918 --rc genhtml_branch_coverage=1 00:07:45.918 --rc genhtml_function_coverage=1 00:07:45.918 --rc genhtml_legend=1 00:07:45.918 --rc geninfo_all_blocks=1 00:07:45.918 --rc geninfo_unexecuted_blocks=1 00:07:45.918 00:07:45.918 ' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.918 --rc genhtml_branch_coverage=1 00:07:45.918 --rc genhtml_function_coverage=1 00:07:45.918 --rc genhtml_legend=1 00:07:45.918 --rc geninfo_all_blocks=1 00:07:45.918 --rc geninfo_unexecuted_blocks=1 00:07:45.918 00:07:45.918 ' 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:45.918 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63228 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:45.919 13:05:37 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 63228 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 63228 ']' 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.919 13:05:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.178 [2024-12-11 13:05:37.549853] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:07:46.178 [2024-12-11 13:05:37.550174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63228 ] 00:07:46.178 [2024-12-11 13:05:37.732095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.437 [2024-12-11 13:05:37.854376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.375 13:05:38 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.375 13:05:38 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:47.375 13:05:38 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:47.375 13:05:38 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:47.375 13:05:38 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.202 Waiting for block devices as requested 00:07:48.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.461 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.461 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.736 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:53.737 13:05:45 
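waitforlisten above blocks until the freshly launched spdk_tgt answers on its RPC socket. A rough standalone equivalent, assuming the default /var/tmp/spdk.sock path that the trace also prints:

    cd /home/vagrant/spdk_repo/spdk
    # poll the RPC server; rpc_get_methods succeeds once the target is listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done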
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
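The get_zoned_devs walk above reduces to a sysfs read per namespace; every device in this VM reports none, so nothing is excluded from the GPT setup. Roughly:

    # "none" marks a conventional namespace; "host-aware"/"host-managed"
    # would mark it zoned and exclude it from the partitioning below
    for ns in /sys/block/nvme*n*; do
        [[ -e $ns/queue/zoned ]] && echo "$ns: $(cat "$ns"/queue/zoned)"
    done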
00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:53.737 BYT; 00:07:53.737 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:53.737 BYT; 00:07:53.737 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.737 13:05:45 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.737 13:05:45 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:54.675 The operation has completed successfully. 00:07:54.675 13:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:56.052 The operation has completed successfully. 00:07:56.052 13:05:47 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.188 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.446 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.446 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.446 13:05:48 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:57.446 13:05:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.446 13:05:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.446 [] 00:07:57.446 13:05:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.446 13:05:48 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:57.446 13:05:48 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:57.446 13:05:48 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:57.446 13:05:48 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.705 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:57.705 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.705 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:57.964 13:05:49 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:57.964 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.964 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:58.224 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:58.225 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4d53147e-6127-4053-b9d8-a0ff5765a59f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d53147e-6127-4053-b9d8-a0ff5765a59f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e69fa9e9-92a6-4667-a6f9-4a64c5949430"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e69fa9e9-92a6-4667-a6f9-4a64c5949430",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "61818320-34ce-40f8-9dda-9858fdddafc5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61818320-34ce-40f8-9dda-9858fdddafc5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fc36304b-51f7-411d-8f9c-64df86dc23f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fc36304b-51f7-411d-8f9c-64df86dc23f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0eb7e27e-ffee-413e-9496-d4a2462b99f2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0eb7e27e-ffee-413e-9496-d4a2462b99f2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:58.225 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:58.225 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:58.225 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:58.225 13:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 63228 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 63228 ']' 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 63228 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63228 00:07:58.225 killing process with pid 63228 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63228' 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 63228 00:07:58.225 13:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 63228 00:08:01.515 13:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:01.515 13:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:01.515 13:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:01.515 13:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.515 13:05:52 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.515 ************************************ 00:08:01.515 START TEST bdev_hello_world 00:08:01.515 ************************************ 00:08:01.515 13:05:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:01.515 [2024-12-11 13:05:52.462497] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:01.515 [2024-12-11 13:05:52.463066] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63881 ] 00:08:01.515 [2024-12-11 13:05:52.649448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.515 [2024-12-11 13:05:52.791082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.083 [2024-12-11 13:05:53.512414] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:02.083 [2024-12-11 13:05:53.512476] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:02.083 [2024-12-11 13:05:53.512518] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:02.083 [2024-12-11 13:05:53.515756] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:02.083 [2024-12-11 13:05:53.516293] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:02.083 [2024-12-11 13:05:53.516325] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:02.083 [2024-12-11 13:05:53.516569] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
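hello_bdev, as the notices above show, opens the bdev named by -b, writes a "Hello World!" buffer, and reads it back. Retargeting the same binary at one of the GPT partitions created earlier is a one-flag change (a sketch; any bdev name present in bdev.json works):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme1n1p1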
00:08:02.083 00:08:02.083 [2024-12-11 13:05:53.516739] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:03.461 00:08:03.461 real 0m2.418s 00:08:03.461 user 0m1.948s 00:08:03.461 sys 0m0.357s 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:03.461 ************************************ 00:08:03.461 END TEST bdev_hello_world 00:08:03.461 ************************************ 00:08:03.461 13:05:54 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:03.461 13:05:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.461 13:05:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.461 13:05:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.461 ************************************ 00:08:03.461 START TEST bdev_bounds 00:08:03.461 ************************************ 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63929 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.461 Process bdevio pid: 63929 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63929' 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63929 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63929 ']' 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.461 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.462 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.462 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.462 13:05:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:03.462 [2024-12-11 13:05:54.951373] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
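bdevio is launched here with -w, so after init it idles and waits to be driven over RPC instead of running its tests immediately; once waitforlisten sees the socket, the harness kicks the suites off with the call that appears next in the trace:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests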
00:08:03.462 [2024-12-11 13:05:54.951542] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63929 ]
00:08:03.722 [2024-12-11 13:05:55.143225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.981 [2024-12-11 13:05:55.291071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.981 [2024-12-11 13:05:55.291202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.981 [2024-12-11 13:05:55.291229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:04.548 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:04.548 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:08:04.548 13:05:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:04.808 I/O targets:
00:08:04.808 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:04.808 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:08:04.808 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:08:04.808 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:04.808 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:04.808 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:04.808 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:04.808
00:08:04.808
00:08:04.808 CUnit - A unit testing framework for C - Version 2.1-3
00:08:04.808 http://cunit.sourceforge.net/
00:08:04.808
00:08:04.808
00:08:04.808 Suite: bdevio tests on: Nvme3n1
00:08:04.808 Test: blockdev write read block ...passed
00:08:04.808 Test: blockdev write zeroes read block ...passed
00:08:04.808 Test: blockdev write zeroes read no split ...passed
00:08:04.808 Test: blockdev write zeroes read split ...passed
00:08:04.808 Test: blockdev write zeroes read split partial ...passed
00:08:04.808 Test: blockdev reset ...[2024-12-11 13:05:56.238658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:08:04.808 [2024-12-11 13:05:56.242779] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:04.808 passed 00:08:04.808 Test: blockdev write read 8 blocks ...passed 00:08:04.808 Test: blockdev write read size > 128k ...passed 00:08:04.808 Test: blockdev write read invalid size ...passed 00:08:04.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.808 Test: blockdev write read max offset ...passed 00:08:04.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.808 Test: blockdev writev readv 8 blocks ...passed 00:08:04.808 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.808 Test: blockdev writev readv block ...passed 00:08:04.808 Test: blockdev writev readv size > 128k ...passed 00:08:04.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.808 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.253632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:04.808 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b7404000 len:0x1000 00:08:04.808 [2024-12-11 13:05:56.253839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.808 passed 00:08:04.808 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.808 Test: blockdev nvme admin passthru ...[2024-12-11 13:05:56.255011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.808 [2024-12-11 13:05:56.255068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.808 passed 00:08:04.808 Test: blockdev copy ...passed 00:08:04.808 Suite: bdevio tests on: Nvme2n3 00:08:04.808 Test: blockdev write read block ...passed 00:08:04.808 Test: blockdev write zeroes read block ...passed 00:08:04.808 Test: blockdev write zeroes read no split ...passed 00:08:04.808 Test: blockdev write zeroes read split ...passed 00:08:04.808 Test: blockdev write zeroes read split partial ...passed 00:08:04.808 Test: blockdev reset ...[2024-12-11 13:05:56.332643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:04.808 [2024-12-11 13:05:56.337202] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:04.808 passed 00:08:04.808 Test: blockdev write read 8 blocks ...passed 00:08:04.808 Test: blockdev write read size > 128k ...passed 00:08:04.808 Test: blockdev write read invalid size ...passed 00:08:04.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.808 Test: blockdev write read max offset ...passed 00:08:04.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.808 Test: blockdev writev readv 8 blocks ...passed 00:08:04.808 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.808 Test: blockdev writev readv block ...passed 00:08:04.808 Test: blockdev writev readv size > 128k ...passed 00:08:04.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.808 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.348047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7402000 len:0x1000 00:08:04.808 [2024-12-11 13:05:56.348256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.808 passed 00:08:04.808 Test: blockdev nvme passthru rw ...passed 00:08:04.808 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:05:56.349866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.808 [2024-12-11 13:05:56.350049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.808 passed 00:08:04.808 Test: blockdev nvme admin passthru ...passed 00:08:04.808 Test: blockdev copy ...passed 00:08:04.808 Suite: bdevio tests on: Nvme2n2 00:08:04.808 Test: blockdev write read block ...passed 00:08:04.808 Test: blockdev write zeroes read block ...passed 00:08:04.808 Test: blockdev write zeroes read no split ...passed 00:08:05.068 Test: blockdev write zeroes read split ...passed 00:08:05.068 Test: blockdev write zeroes read split partial ...passed 00:08:05.068 Test: blockdev reset ...[2024-12-11 13:05:56.425364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:05.068 [2024-12-11 13:05:56.429895] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:05.068 passed 00:08:05.068 Test: blockdev write read 8 blocks ...passed 00:08:05.068 Test: blockdev write read size > 128k ...passed 00:08:05.068 Test: blockdev write read invalid size ...passed 00:08:05.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.068 Test: blockdev write read max offset ...passed 00:08:05.068 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.068 Test: blockdev writev readv 8 blocks ...passed 00:08:05.068 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.068 Test: blockdev writev readv block ...passed 00:08:05.068 Test: blockdev writev readv size > 128k ...passed 00:08:05.068 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.068 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.439441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba38000 len:0x1000 00:08:05.069 [2024-12-11 13:05:56.439489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.069 passed 00:08:05.069 Test: blockdev nvme passthru rw ...passed 00:08:05.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.069 Test: blockdev nvme admin passthru ...[2024-12-11 13:05:56.440571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.069 [2024-12-11 13:05:56.440609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.069 passed 00:08:05.069 Test: blockdev copy ...passed 00:08:05.069 Suite: bdevio tests on: Nvme2n1 00:08:05.069 Test: blockdev write read block ...passed 00:08:05.069 Test: blockdev write zeroes read block ...passed 00:08:05.069 Test: blockdev write zeroes read no split ...passed 00:08:05.069 Test: blockdev write zeroes read split ...passed 00:08:05.069 Test: blockdev write zeroes read split partial ...passed 00:08:05.069 Test: blockdev reset ...[2024-12-11 13:05:56.517579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:05.069 [2024-12-11 13:05:56.521935] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:05.069 passed 00:08:05.069 Test: blockdev write read 8 blocks ...passed 00:08:05.069 Test: blockdev write read size > 128k ...passed 00:08:05.069 Test: blockdev write read invalid size ...passed 00:08:05.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.069 Test: blockdev write read max offset ...passed 00:08:05.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.069 Test: blockdev writev readv 8 blocks ...passed 00:08:05.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.069 Test: blockdev writev readv block ...passed 00:08:05.069 Test: blockdev writev readv size > 128k ...passed 00:08:05.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.069 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.531742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba34000 len:0x1000 00:08:05.069 [2024-12-11 13:05:56.531812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.069 passed 00:08:05.069 Test: blockdev nvme passthru rw ...passed 00:08:05.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.069 Test: blockdev nvme admin passthru ...[2024-12-11 13:05:56.532866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.069 [2024-12-11 13:05:56.532909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.069 passed 00:08:05.069 Test: blockdev copy ...passed 00:08:05.069 Suite: bdevio tests on: Nvme1n1p2 00:08:05.069 Test: blockdev write read block ...passed 00:08:05.069 Test: blockdev write zeroes read block ...passed 00:08:05.069 Test: blockdev write zeroes read no split ...passed 00:08:05.069 Test: blockdev write zeroes read split ...passed 00:08:05.069 Test: blockdev write zeroes read split partial ...passed 00:08:05.069 Test: blockdev reset ...[2024-12-11 13:05:56.609974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:05.069 [2024-12-11 13:05:56.613938] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:05.069 passed 00:08:05.069 Test: blockdev write read 8 blocks ...passed 00:08:05.069 Test: blockdev write read size > 128k ...passed 00:08:05.069 Test: blockdev write read invalid size ...passed 00:08:05.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.069 Test: blockdev write read max offset ...passed 00:08:05.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.069 Test: blockdev writev readv 8 blocks ...passed 00:08:05.069 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.069 Test: blockdev writev readv block ...passed 00:08:05.069 Test: blockdev writev readv size > 128k ...passed 00:08:05.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.069 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.625371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cba30000 len:0x1000 00:08:05.069 [2024-12-11 13:05:56.625595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.069 passed 00:08:05.069 Test: blockdev nvme passthru rw ...passed 00:08:05.069 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.069 Test: blockdev nvme admin passthru ...passed 00:08:05.069 Test: blockdev copy ...passed 00:08:05.069 Suite: bdevio tests on: Nvme1n1p1 00:08:05.069 Test: blockdev write read block ...passed 00:08:05.069 Test: blockdev write zeroes read block ...passed 00:08:05.329 Test: blockdev write zeroes read no split ...passed 00:08:05.329 Test: blockdev write zeroes read split ...passed 00:08:05.329 Test: blockdev write zeroes read split partial ...passed 00:08:05.329 Test: blockdev reset ...[2024-12-11 13:05:56.697271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:05.329 [2024-12-11 13:05:56.701209] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
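The lba: values in the COMPARE notices differ between suites even though every suite runs the same test offsets, because a GPT partition bdev shifts each I/O by the partition's start LBA on the parent namespace: 655360 in the Nvme1n1p2 suite above, 256 in the Nvme1n1p1 suite that follows, and 0 for the raw namespaces. The geometry behind that translation can be read back over RPC; a sketch, where the socket path is an assumption and the jq fields are the standard bdev_get_bdevs output:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq '.[] | select(.name | startswith("Nvme1n1p")) | {name, block_size, num_blocks}'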
00:08:05.329 passed 00:08:05.329 Test: blockdev write read 8 blocks ...passed 00:08:05.329 Test: blockdev write read size > 128k ...passed 00:08:05.329 Test: blockdev write read invalid size ...passed 00:08:05.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.329 Test: blockdev write read max offset ...passed 00:08:05.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.330 Test: blockdev writev readv 8 blocks ...passed 00:08:05.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.330 Test: blockdev writev readv block ...passed 00:08:05.330 Test: blockdev writev readv size > 128k ...passed 00:08:05.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.330 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.711211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b7e0e000 len:0x1000 00:08:05.330 [2024-12-11 13:05:56.711259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.330 passed 00:08:05.330 Test: blockdev nvme passthru rw ...passed 00:08:05.330 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.330 Test: blockdev nvme admin passthru ...passed 00:08:05.330 Test: blockdev copy ...passed 00:08:05.330 Suite: bdevio tests on: Nvme0n1 00:08:05.330 Test: blockdev write read block ...passed 00:08:05.330 Test: blockdev write zeroes read block ...passed 00:08:05.330 Test: blockdev write zeroes read no split ...passed 00:08:05.330 Test: blockdev write zeroes read split ...passed 00:08:05.330 Test: blockdev write zeroes read split partial ...passed 00:08:05.330 Test: blockdev reset ...[2024-12-11 13:05:56.782061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:05.330 [2024-12-11 13:05:56.786223] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:05.330 passed 00:08:05.330 Test: blockdev write read 8 blocks ...passed 00:08:05.330 Test: blockdev write read size > 128k ...passed 00:08:05.330 Test: blockdev write read invalid size ...passed 00:08:05.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.330 Test: blockdev write read max offset ...passed 00:08:05.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.330 Test: blockdev writev readv 8 blocks ...passed 00:08:05.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.330 Test: blockdev writev readv block ...passed 00:08:05.330 Test: blockdev writev readv size > 128k ...passed 00:08:05.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.330 Test: blockdev comparev and writev ...[2024-12-11 13:05:56.794653] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:05.330 separate metadata which is not supported yet. 
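The skip message above is informational: Nvme0n1 is formatted with a separate (non-interleaved) metadata buffer per block, which bdevio's compare-and-write path does not yet support, so that one case self-skips while the rest of the suite still runs. Whether a bdev carries such metadata can be checked over RPC; a sketch, where the socket path and the md_size/md_interleave field names are assumptions about this SPDK version's bdev_get_bdevs output:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, block_size, md_size, md_interleave}'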
00:08:05.330 passed 00:08:05.330 Test: blockdev nvme passthru rw ...passed 00:08:05.330 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.330 Test: blockdev nvme admin passthru ...[2024-12-11 13:05:56.795530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:05.330 [2024-12-11 13:05:56.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:05.330 passed 00:08:05.330 Test: blockdev copy ...passed 00:08:05.330 00:08:05.330 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.330 suites 7 7 n/a 0 0 00:08:05.330 tests 161 161 161 0 0 00:08:05.330 asserts 1025 1025 1025 0 n/a 00:08:05.330 00:08:05.330 Elapsed time = 1.714 seconds 00:08:05.330 0 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63929 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63929 ']' 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63929 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63929 00:08:05.330 killing process with pid 63929 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63929' 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63929 00:08:05.330 13:05:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63929 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:06.708 00:08:06.708 real 0m3.192s 00:08:06.708 user 0m8.025s 00:08:06.708 sys 0m0.534s 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.708 ************************************ 00:08:06.708 END TEST bdev_bounds 00:08:06.708 ************************************ 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:06.708 13:05:58 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:06.708 13:05:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.708 13:05:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.708 13:05:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.708 ************************************ 00:08:06.708 START TEST bdev_nbd 00:08:06.708 ************************************ 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:06.708 13:05:58 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63994 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63994 /var/tmp/spdk-nbd.sock 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63994 ']' 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.708 13:05:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:06.708 [2024-12-11 13:05:58.245653] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
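From here the harness switches from bdevio to the NBD front end: bdev_svc loads the same bdev.json, waitforlisten polls until the RPC socket accepts connections, and nbd_rpc_start_stop_verify then exports every bdev as a kernel /dev/nbdX node and tears it down again. A condensed sketch of one such cycle, using only RPCs and checks that appear in this trace; the bdev/device pair is one example from the list, and the sketch reads into /dev/null rather than the test's scratch file:

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0         # export bdev via NBD
  grep -q -w nbd0 /proc/partitions                           # kernel registered the disk
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct  # one direct-I/O read
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[].nbd_device'   # empty once all are stopped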
00:08:06.708 [2024-12-11 13:05:58.245930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.968 [2024-12-11 13:05:58.434781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.226 [2024-12-11 13:05:58.579900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.164 1+0 records in 00:08:08.164 1+0 records out 00:08:08.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102835 s, 4.0 MB/s 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.164 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.424 1+0 records in 00:08:08.424 1+0 records out 00:08:08.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725402 s, 5.6 MB/s 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.424 13:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.683 1+0 records in 00:08:08.683 1+0 records out 00:08:08.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661181 s, 6.2 MB/s 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.683 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.942 1+0 records in 00:08:08.942 1+0 records out 00:08:08.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770806 s, 5.3 MB/s 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.942 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.201 1+0 records in 00:08:09.201 1+0 records out 00:08:09.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661448 s, 6.2 MB/s 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:09.201 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.460 1+0 records in 00:08:09.460 1+0 records out 00:08:09.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828608 s, 4.9 MB/s 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:09.460 13:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.719 1+0 records in 00:08:09.719 1+0 records out 00:08:09.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691882 s, 5.9 MB/s 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:09.719 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd0", 00:08:09.978 "bdev_name": "Nvme0n1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd1", 00:08:09.978 "bdev_name": "Nvme1n1p1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd2", 00:08:09.978 "bdev_name": "Nvme1n1p2" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd3", 00:08:09.978 "bdev_name": "Nvme2n1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd4", 00:08:09.978 "bdev_name": "Nvme2n2" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd5", 00:08:09.978 "bdev_name": "Nvme2n3" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd6", 00:08:09.978 "bdev_name": "Nvme3n1" 00:08:09.978 } 00:08:09.978 ]' 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd0", 00:08:09.978 "bdev_name": "Nvme0n1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd1", 00:08:09.978 "bdev_name": "Nvme1n1p1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd2", 00:08:09.978 "bdev_name": "Nvme1n1p2" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd3", 00:08:09.978 "bdev_name": "Nvme2n1" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd4", 00:08:09.978 "bdev_name": "Nvme2n2" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd5", 00:08:09.978 "bdev_name": "Nvme2n3" 00:08:09.978 }, 00:08:09.978 { 00:08:09.978 "nbd_device": "/dev/nbd6", 00:08:09.978 "bdev_name": "Nvme3n1" 00:08:09.978 } 00:08:09.978 ]' 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.978 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.547 13:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.547 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.806 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.806 13:06:02 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.064 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.065 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.065 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.323 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.654 13:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.654 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:11.912 
13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.912 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:12.171 /dev/nbd0 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.171 1+0 records in 00:08:12.171 1+0 records out 00:08:12.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472517 s, 8.7 MB/s 00:08:12.171 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:12.431 /dev/nbd1 00:08:12.431 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:12.691 13:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:12.691 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:12.691 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:12.691 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.691 13:06:03 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.691 13:06:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.691 1+0 records in 00:08:12.691 1+0 records out 00:08:12.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729344 s, 5.6 MB/s 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:12.691 /dev/nbd10 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.691 1+0 records in 00:08:12.691 1+0 records out 00:08:12.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695519 s, 5.9 MB/s 00:08:12.691 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:12.951 /dev/nbd11 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.951 1+0 records in 00:08:12.951 1+0 records out 00:08:12.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061133 s, 6.7 MB/s 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:12.951 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:13.211 /dev/nbd12 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
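The waitfornbd helper traced throughout this phase has two stages: it polls /proc/partitions until the kernel has registered the named device (up to 20 attempts), then proves the device is actually readable by copying a single 4096-byte block with direct I/O and checking that the copied file has non-zero size. A standalone re-implementation sketch; the retry delay and scratch path are assumptions, and the real helper lives in test/common/autotest_common.sh:

  waitfornbd_sketch() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do                     # poll for kernel registration
      grep -q -w "$name" /proc/partitions && break
      sleep 0.1                                         # assumed delay between polls
    done
    dd "if=/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]               # the read produced real data
    local ok=$?
    rm -f /tmp/nbdtest
    return "$ok"
  }
  waitfornbd_sketch nbd12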
00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.211 1+0 records in 00:08:13.211 1+0 records out 00:08:13.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707879 s, 5.8 MB/s 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:13.211 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:13.470 /dev/nbd13 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.470 1+0 records in 00:08:13.470 1+0 records out 00:08:13.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766902 s, 5.3 MB/s 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:13.470 13:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:13.738 /dev/nbd14 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.738 1+0 records in 00:08:13.738 1+0 records out 00:08:13.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735034 s, 5.6 MB/s 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.738 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.997 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd0", 00:08:13.997 "bdev_name": "Nvme0n1" 00:08:13.997 }, 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd1", 00:08:13.997 "bdev_name": "Nvme1n1p1" 00:08:13.997 }, 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd10", 00:08:13.997 "bdev_name": "Nvme1n1p2" 00:08:13.997 }, 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd11", 00:08:13.997 "bdev_name": "Nvme2n1" 00:08:13.997 }, 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd12", 00:08:13.997 "bdev_name": "Nvme2n2" 00:08:13.997 }, 00:08:13.997 { 00:08:13.997 "nbd_device": "/dev/nbd13", 00:08:13.998 "bdev_name": "Nvme2n3" 
00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd14", 00:08:13.998 "bdev_name": "Nvme3n1" 00:08:13.998 } 00:08:13.998 ]' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd0", 00:08:13.998 "bdev_name": "Nvme0n1" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd1", 00:08:13.998 "bdev_name": "Nvme1n1p1" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd10", 00:08:13.998 "bdev_name": "Nvme1n1p2" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd11", 00:08:13.998 "bdev_name": "Nvme2n1" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd12", 00:08:13.998 "bdev_name": "Nvme2n2" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd13", 00:08:13.998 "bdev_name": "Nvme2n3" 00:08:13.998 }, 00:08:13.998 { 00:08:13.998 "nbd_device": "/dev/nbd14", 00:08:13.998 "bdev_name": "Nvme3n1" 00:08:13.998 } 00:08:13.998 ]' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:13.998 /dev/nbd1 00:08:13.998 /dev/nbd10 00:08:13.998 /dev/nbd11 00:08:13.998 /dev/nbd12 00:08:13.998 /dev/nbd13 00:08:13.998 /dev/nbd14' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:13.998 /dev/nbd1 00:08:13.998 /dev/nbd10 00:08:13.998 /dev/nbd11 00:08:13.998 /dev/nbd12 00:08:13.998 /dev/nbd13 00:08:13.998 /dev/nbd14' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:13.998 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:13.998 256+0 records in 00:08:13.998 256+0 records out 00:08:13.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075493 s, 139 MB/s 00:08:14.257 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.257 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:14.257 256+0 records in 00:08:14.257 256+0 records out 00:08:14.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.139995 s, 7.5 MB/s 00:08:14.257 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.257 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:14.516 256+0 records in 00:08:14.516 256+0 records out 00:08:14.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154219 s, 6.8 MB/s 00:08:14.516 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.516 13:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:14.516 256+0 records in 00:08:14.516 256+0 records out 00:08:14.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152367 s, 6.9 MB/s 00:08:14.516 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.516 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:14.775 256+0 records in 00:08:14.775 256+0 records out 00:08:14.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149895 s, 7.0 MB/s 00:08:14.775 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.775 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:14.775 256+0 records in 00:08:14.775 256+0 records out 00:08:14.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145242 s, 7.2 MB/s 00:08:14.775 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.775 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:15.035 256+0 records in 00:08:15.035 256+0 records out 00:08:15.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147577 s, 7.1 MB/s 00:08:15.035 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.035 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:15.295 256+0 records in 00:08:15.295 256+0 records out 00:08:15.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149749 s, 7.0 MB/s 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.295 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.555 13:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:15.814 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.075 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:16.334 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.334 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.334 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.334 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:16.334 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.335 13:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.593 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.852 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:17.112 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:17.372 malloc_lvol_verify 00:08:17.372 13:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:17.631 6a6bbbb8-5041-4d5a-9f5a-813bccedf4d4 00:08:17.631 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:17.891 b126350b-381e-4062-8083-909b9183efac 00:08:17.891 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:18.151 /dev/nbd0 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:18.151 mke2fs 1.47.0 (5-Feb-2023) 00:08:18.151 Discarding device blocks: 0/4096 done 00:08:18.151 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:18.151 00:08:18.151 Allocating group tables: 0/1 done 00:08:18.151 Writing inode tables: 0/1 done 00:08:18.151 Creating journal (1024 blocks): done 00:08:18.151 Writing superblocks and filesystem accounting information: 0/1 done 00:08:18.151 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:18.151 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63994 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63994 ']' 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63994 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63994 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63994' 00:08:18.410 killing process with pid 63994 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63994 00:08:18.410 13:06:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63994 00:08:19.815 13:06:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:19.815 00:08:19.815 real 0m12.881s 00:08:19.815 user 0m16.484s 00:08:19.815 sys 0m5.482s 00:08:19.815 ************************************ 00:08:19.815 END TEST bdev_nbd 00:08:19.815 ************************************ 00:08:19.815 13:06:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.815 13:06:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:19.815 skipping fio tests on NVMe due to multi-ns failures. 00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:19.815 13:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:19.815 13:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:19.815 13:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.815 13:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.815 ************************************ 00:08:19.815 START TEST bdev_verify 00:08:19.815 ************************************ 00:08:19.815 13:06:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:19.815 [2024-12-11 13:06:11.187822] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:19.815 [2024-12-11 13:06:11.187957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64420 ] 00:08:19.815 [2024-12-11 13:06:11.376500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.075 [2024-12-11 13:06:11.529024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.075 [2024-12-11 13:06:11.529076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.011 Running I/O for 5 seconds... 
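NOTE: The run launched above is the bdev_verify stage. The command and its values are copied from the run_test line in the log; the flag glosses are my reading of the bdevperf usage text, not authoritative documentation.

    # Reconstructed bdev_verify invocation, annotated:
    #   -q 128     keep 128 I/Os outstanding per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write a pattern, read it back, and compare contents
    #   -t 5       run for 5 seconds
    #   -C         let every core submit I/O to every bdev (hence the paired
    #              Core Mask 0x1 / 0x2 jobs per bdev in the table below)
    #   -m 0x3     run reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3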
00:08:23.326 17920.00 IOPS, 70.00 MiB/s [2024-12-11T13:06:15.831Z] 19968.00 IOPS, 78.00 MiB/s [2024-12-11T13:06:16.767Z] 19200.00 IOPS, 75.00 MiB/s [2024-12-11T13:06:17.705Z] 19536.00 IOPS, 76.31 MiB/s [2024-12-11T13:06:17.705Z] 19558.40 IOPS, 76.40 MiB/s 00:08:26.137 Latency(us) 00:08:26.137 [2024-12-11T13:06:17.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.137 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0xbd0bd 00:08:26.137 Nvme0n1 : 5.08 1372.76 5.36 0.00 0.00 92718.82 12633.45 94750.84 00:08:26.137 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:26.137 Nvme0n1 : 5.08 1373.40 5.36 0.00 0.00 92639.99 19055.45 90118.58 00:08:26.137 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x4ff80 00:08:26.137 Nvme1n1p1 : 5.09 1371.85 5.36 0.00 0.00 92571.19 14002.07 87591.89 00:08:26.137 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:26.137 Nvme1n1p1 : 5.10 1380.93 5.39 0.00 0.00 92309.05 15054.86 82538.51 00:08:26.137 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x4ff7f 00:08:26.137 Nvme1n1p2 : 5.10 1380.42 5.39 0.00 0.00 92069.06 10843.71 73273.99 00:08:26.137 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:26.137 Nvme1n1p2 : 5.10 1380.22 5.39 0.00 0.00 92134.63 16634.04 72852.87 00:08:26.137 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x80000 00:08:26.137 Nvme2n1 : 5.10 1379.75 5.39 0.00 0.00 91927.36 12370.25 68641.72 00:08:26.137 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x80000 length 0x80000 00:08:26.137 Nvme2n1 : 5.10 1379.91 5.39 0.00 0.00 92004.77 16739.32 76221.79 00:08:26.137 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x80000 00:08:26.137 Nvme2n2 : 5.10 1379.32 5.39 0.00 0.00 91763.26 12370.25 67378.38 00:08:26.137 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x80000 length 0x80000 00:08:26.137 Nvme2n2 : 5.10 1379.57 5.39 0.00 0.00 91856.52 16949.87 78327.36 00:08:26.137 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x80000 00:08:26.137 Nvme2n3 : 5.11 1378.87 5.39 0.00 0.00 91606.02 12791.36 68641.72 00:08:26.137 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x80000 length 0x80000 00:08:26.137 Nvme2n3 : 5.10 1379.14 5.39 0.00 0.00 91689.72 17160.43 79169.59 00:08:26.137 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x0 length 0x20000 00:08:26.137 Nvme3n1 : 5.11 1378.43 5.38 0.00 0.00 91465.26 13107.20 72852.87 00:08:26.137 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.137 Verification LBA range: start 0x20000 length 0x20000 00:08:26.137 
Nvme3n1 : 5.11 1378.72 5.39 0.00 0.00 91527.25 16949.87 78748.48 00:08:26.137 [2024-12-11T13:06:17.705Z] =================================================================================================================== 00:08:26.137 [2024-12-11T13:06:17.705Z] Total : 19293.30 75.36 0.00 0.00 92018.99 10843.71 94750.84 00:08:27.540 00:08:27.540 real 0m7.986s 00:08:27.540 user 0m14.631s 00:08:27.540 sys 0m0.421s 00:08:27.540 13:06:19 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.540 13:06:19 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:27.540 ************************************ 00:08:27.540 END TEST bdev_verify 00:08:27.540 ************************************ 00:08:27.800 13:06:19 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.800 13:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:27.800 13:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.800 13:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.800 ************************************ 00:08:27.800 START TEST bdev_verify_big_io 00:08:27.800 ************************************ 00:08:27.800 13:06:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.800 [2024-12-11 13:06:19.252682] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:27.800 [2024-12-11 13:06:19.252827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64530 ] 00:08:28.059 [2024-12-11 13:06:19.427677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.060 [2024-12-11 13:06:19.574597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.060 [2024-12-11 13:06:19.574630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.996 Running I/O for 5 seconds... 
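NOTE: The bdev_verify_big_io stage launched above reuses the exact harness of the previous stage with one change: -o 65536, i.e. 64 KiB I/Os instead of 4 KiB. That is why total IOPS drop by roughly 10x (19293 to 2088) while aggregate throughput nearly doubles (75.36 to 130.48 MiB/s). The MiB/s column is simply IOPS x I/O size / 2^20; a quick cross-check against the Total row further below (command reconstructed from the run_test line):

    # Same command as the verify stage except for the I/O size:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

    # MiB/s = IOPS * io_size / 2^20, e.g. for the Total row:
    awk 'BEGIN { printf "%.2f\n", 2087.76 * 65536 / 2^20 }'   # 130.48, matching the table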
00:08:34.574 2938.00 IOPS, 183.62 MiB/s [2024-12-11T13:06:26.400Z] 4311.00 IOPS, 269.44 MiB/s 00:08:34.832 Latency(us) 00:08:34.832 [2024-12-11T13:06:26.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.833 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0xbd0b 00:08:34.833 Nvme0n1 : 5.71 179.03 11.19 0.00 0.00 686536.05 21266.30 916345.93 00:08:34.833 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:34.833 Nvme0n1 : 5.68 95.73 5.98 0.00 0.00 1288068.82 31794.17 1354305.39 00:08:34.833 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x4ff8 00:08:34.833 Nvme1n1p1 : 5.71 180.03 11.25 0.00 0.00 666221.94 78327.36 768113.50 00:08:34.833 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:34.833 Nvme1n1p1 : 5.62 102.14 6.38 0.00 0.00 1185362.03 96014.19 1165645.93 00:08:34.833 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x4ff7 00:08:34.833 Nvme1n1p2 : 5.76 183.63 11.48 0.00 0.00 642007.73 90960.81 680521.61 00:08:34.833 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:34.833 Nvme1n1p2 : 5.74 106.35 6.65 0.00 0.00 1114199.16 63167.23 1428421.60 00:08:34.833 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x8000 00:08:34.833 Nvme2n1 : 5.76 186.16 11.63 0.00 0.00 625063.89 43164.27 592929.72 00:08:34.833 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x8000 length 0x8000 00:08:34.833 Nvme2n1 : 5.74 106.51 6.66 0.00 0.00 1086693.10 63588.34 1455372.95 00:08:34.833 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x8000 00:08:34.833 Nvme2n2 : 5.80 185.39 11.59 0.00 0.00 615022.68 15370.69 1179121.61 00:08:34.833 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x8000 length 0x8000 00:08:34.833 Nvme2n2 : 5.76 111.55 6.97 0.00 0.00 1021512.33 48007.09 1468848.63 00:08:34.833 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x8000 00:08:34.833 Nvme2n3 : 5.82 190.32 11.89 0.00 0.00 585100.44 22424.37 1192597.28 00:08:34.833 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x8000 length 0x8000 00:08:34.833 Nvme2n3 : 5.79 121.41 7.59 0.00 0.00 923213.84 23056.04 1462110.79 00:08:34.833 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x0 length 0x2000 00:08:34.833 Nvme3n1 : 5.84 207.07 12.94 0.00 0.00 526240.92 4711.22 1212810.80 00:08:34.833 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:34.833 Verification LBA range: start 0x2000 length 0x2000 00:08:34.833 Nvme3n1 : 5.81 132.45 8.28 0.00 0.00 827319.18 3763.71 1509275.66 00:08:34.833 [2024-12-11T13:06:26.401Z] 
=================================================================================================================== 00:08:34.833 [2024-12-11T13:06:26.401Z] Total : 2087.76 130.48 0.00 0.00 777367.42 3763.71 1509275.66 00:08:37.371 00:08:37.371 real 0m9.278s 00:08:37.371 user 0m17.167s 00:08:37.371 sys 0m0.463s 00:08:37.371 13:06:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.371 13:06:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:37.371 ************************************ 00:08:37.371 END TEST bdev_verify_big_io 00:08:37.371 ************************************ 00:08:37.371 13:06:28 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.371 13:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:37.371 13:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.371 13:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.371 ************************************ 00:08:37.371 START TEST bdev_write_zeroes 00:08:37.371 ************************************ 00:08:37.371 13:06:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.371 [2024-12-11 13:06:28.609011] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:37.371 [2024-12-11 13:06:28.609155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64645 ] 00:08:37.371 [2024-12-11 13:06:28.798439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.371 [2024-12-11 13:06:28.934915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.308 Running I/O for 1 seconds... 
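NOTE: The bdev_write_zeroes stage launched above swaps the workload to -w write_zeroes, exercising each bdev's zero-fill path for one second. The EAL parameter dump shows -c 0x1, so this run uses a single core with no -C/-m 0x3 pairing, and the table below reports one job per bdev. The command is reconstructed from the run_test line above; the annotations are mine.

    # One second of 4 KiB write-zeroes I/O, queue depth 128, single core:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1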
00:08:39.241 65856.00 IOPS, 257.25 MiB/s 00:08:39.241 Latency(us) 00:08:39.241 [2024-12-11T13:06:30.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.241 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme0n1 : 1.02 9372.78 36.61 0.00 0.00 13628.20 7053.67 30741.38 00:08:39.241 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme1n1p1 : 1.03 9363.59 36.58 0.00 0.00 13624.34 12264.97 31373.06 00:08:39.241 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme1n1p2 : 1.03 9354.40 36.54 0.00 0.00 13599.36 11370.10 30741.38 00:08:39.241 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme2n1 : 1.03 9345.42 36.51 0.00 0.00 13531.98 11317.46 25582.73 00:08:39.241 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme2n2 : 1.03 9336.66 36.47 0.00 0.00 13491.40 11580.66 22424.37 00:08:39.241 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme2n3 : 1.03 9328.27 36.44 0.00 0.00 13461.78 10159.40 22213.81 00:08:39.241 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:39.241 Nvme3n1 : 1.03 9319.82 36.41 0.00 0.00 13441.26 8632.85 23477.15 00:08:39.241 [2024-12-11T13:06:30.809Z] =================================================================================================================== 00:08:39.241 [2024-12-11T13:06:30.809Z] Total : 65420.93 255.55 0.00 0.00 13539.76 7053.67 31373.06 00:08:40.619 00:08:40.619 real 0m3.571s 00:08:40.619 user 0m3.084s 00:08:40.619 sys 0m0.368s 00:08:40.619 13:06:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.619 13:06:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:40.619 ************************************ 00:08:40.619 END TEST bdev_write_zeroes 00:08:40.619 ************************************ 00:08:40.619 13:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.619 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:40.619 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.619 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.619 ************************************ 00:08:40.619 START TEST bdev_json_nonenclosed 00:08:40.619 ************************************ 00:08:40.619 13:06:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.879 [2024-12-11 13:06:32.239774] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:08:40.879 [2024-12-11 13:06:32.239899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64703 ] 00:08:40.879 [2024-12-11 13:06:32.421276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.138 [2024-12-11 13:06:32.558677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.138 [2024-12-11 13:06:32.558810] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:41.138 [2024-12-11 13:06:32.558836] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:41.138 [2024-12-11 13:06:32.558850] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.397 00:08:41.397 real 0m0.678s 00:08:41.397 user 0m0.427s 00:08:41.397 sys 0m0.147s 00:08:41.397 13:06:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.397 ************************************ 00:08:41.397 END TEST bdev_json_nonenclosed 00:08:41.397 ************************************ 00:08:41.397 13:06:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:41.397 13:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.397 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:41.397 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.397 13:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.397 ************************************ 00:08:41.397 START TEST bdev_json_nonarray 00:08:41.397 ************************************ 00:08:41.397 13:06:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.656 [2024-12-11 13:06:32.977949] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:41.656 [2024-12-11 13:06:32.978076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64733 ] 00:08:41.656 [2024-12-11 13:06:33.161528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.915 [2024-12-11 13:06:33.300843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.915 [2024-12-11 13:06:33.300959] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:41.915 [2024-12-11 13:06:33.300990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:41.915 [2024-12-11 13:06:33.301003] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.175 00:08:42.175 real 0m0.697s 00:08:42.175 user 0m0.427s 00:08:42.175 sys 0m0.166s 00:08:42.175 ************************************ 00:08:42.175 END TEST bdev_json_nonarray 00:08:42.175 ************************************ 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:42.175 13:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:42.175 13:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:42.175 13:06:33 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:42.175 13:06:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.175 13:06:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.175 13:06:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:42.175 ************************************ 00:08:42.175 START TEST bdev_gpt_uuid 00:08:42.175 ************************************ 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64760 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64760 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64760 ']' 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.175 13:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.435 [2024-12-11 13:06:33.756305] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:08:42.435 [2024-12-11 13:06:33.756428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64760 ] 00:08:42.435 [2024-12-11 13:06:33.936513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.694 [2024-12-11 13:06:34.082623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.631 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.631 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:43.631 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:43.631 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.631 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:44.198 Some configs were skipped because the RPC state that can call them passed over. 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.198 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:44.199 { 00:08:44.199 "name": "Nvme1n1p1", 00:08:44.199 "aliases": [ 00:08:44.199 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:44.199 ], 00:08:44.199 "product_name": "GPT Disk", 00:08:44.199 "block_size": 4096, 00:08:44.199 "num_blocks": 655104, 00:08:44.199 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:44.199 "assigned_rate_limits": { 00:08:44.199 "rw_ios_per_sec": 0, 00:08:44.199 "rw_mbytes_per_sec": 0, 00:08:44.199 "r_mbytes_per_sec": 0, 00:08:44.199 "w_mbytes_per_sec": 0 00:08:44.199 }, 00:08:44.199 "claimed": false, 00:08:44.199 "zoned": false, 00:08:44.199 "supported_io_types": { 00:08:44.199 "read": true, 00:08:44.199 "write": true, 00:08:44.199 "unmap": true, 00:08:44.199 "flush": true, 00:08:44.199 "reset": true, 00:08:44.199 "nvme_admin": false, 00:08:44.199 "nvme_io": false, 00:08:44.199 "nvme_io_md": false, 00:08:44.199 "write_zeroes": true, 00:08:44.199 "zcopy": false, 00:08:44.199 "get_zone_info": false, 00:08:44.199 "zone_management": false, 00:08:44.199 "zone_append": false, 00:08:44.199 "compare": true, 00:08:44.199 "compare_and_write": false, 00:08:44.199 "abort": true, 00:08:44.199 "seek_hole": false, 00:08:44.199 "seek_data": false, 00:08:44.199 "copy": true, 00:08:44.199 "nvme_iov_md": false 00:08:44.199 }, 00:08:44.199 "driver_specific": { 
00:08:44.199 "gpt": { 00:08:44.199 "base_bdev": "Nvme1n1", 00:08:44.199 "offset_blocks": 256, 00:08:44.199 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:44.199 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:44.199 "partition_name": "SPDK_TEST_first" 00:08:44.199 } 00:08:44.199 } 00:08:44.199 } 00:08:44.199 ]' 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:44.199 { 00:08:44.199 "name": "Nvme1n1p2", 00:08:44.199 "aliases": [ 00:08:44.199 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:44.199 ], 00:08:44.199 "product_name": "GPT Disk", 00:08:44.199 "block_size": 4096, 00:08:44.199 "num_blocks": 655103, 00:08:44.199 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:44.199 "assigned_rate_limits": { 00:08:44.199 "rw_ios_per_sec": 0, 00:08:44.199 "rw_mbytes_per_sec": 0, 00:08:44.199 "r_mbytes_per_sec": 0, 00:08:44.199 "w_mbytes_per_sec": 0 00:08:44.199 }, 00:08:44.199 "claimed": false, 00:08:44.199 "zoned": false, 00:08:44.199 "supported_io_types": { 00:08:44.199 "read": true, 00:08:44.199 "write": true, 00:08:44.199 "unmap": true, 00:08:44.199 "flush": true, 00:08:44.199 "reset": true, 00:08:44.199 "nvme_admin": false, 00:08:44.199 "nvme_io": false, 00:08:44.199 "nvme_io_md": false, 00:08:44.199 "write_zeroes": true, 00:08:44.199 "zcopy": false, 00:08:44.199 "get_zone_info": false, 00:08:44.199 "zone_management": false, 00:08:44.199 "zone_append": false, 00:08:44.199 "compare": true, 00:08:44.199 "compare_and_write": false, 00:08:44.199 "abort": true, 00:08:44.199 "seek_hole": false, 00:08:44.199 "seek_data": false, 00:08:44.199 "copy": true, 00:08:44.199 "nvme_iov_md": false 00:08:44.199 }, 00:08:44.199 "driver_specific": { 00:08:44.199 "gpt": { 00:08:44.199 "base_bdev": "Nvme1n1", 00:08:44.199 "offset_blocks": 655360, 00:08:44.199 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:44.199 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:44.199 "partition_name": "SPDK_TEST_second" 00:08:44.199 } 00:08:44.199 } 00:08:44.199 } 00:08:44.199 ]' 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:44.199 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64760 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64760 ']' 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64760 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64760 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.458 killing process with pid 64760 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64760' 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64760 00:08:44.458 13:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64760 00:08:46.995 00:08:46.995 real 0m4.808s 00:08:46.995 user 0m4.723s 00:08:46.995 sys 0m0.721s 00:08:46.995 13:06:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.995 13:06:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:46.995 ************************************ 00:08:46.995 END TEST bdev_gpt_uuid 00:08:46.995 ************************************ 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:46.995 13:06:38 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.822 Waiting for block devices as requested 00:08:48.081 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.081 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:48.081 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.340 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.620 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:53.620 13:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:53.620 13:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:53.620 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:53.620 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:53.620 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:53.620 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:53.620 13:06:45 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:53.620 00:08:53.620 real 1m7.882s 00:08:53.620 user 1m23.636s 00:08:53.620 sys 0m13.253s 00:08:53.620 13:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.620 13:06:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.620 ************************************ 00:08:53.620 END TEST blockdev_nvme_gpt 00:08:53.620 ************************************ 00:08:53.620 13:06:45 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:53.620 13:06:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.620 13:06:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.620 13:06:45 -- common/autotest_common.sh@10 -- # set +x 00:08:53.620 ************************************ 00:08:53.620 START TEST nvme 00:08:53.620 ************************************ 00:08:53.620 13:06:45 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:53.880 * Looking for test storage... 00:08:53.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.880 13:06:45 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.880 13:06:45 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.880 13:06:45 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.880 13:06:45 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.880 13:06:45 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.880 13:06:45 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:53.880 13:06:45 nvme -- scripts/common.sh@345 -- # : 1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.880 13:06:45 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.880 13:06:45 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@353 -- # local d=1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.880 13:06:45 nvme -- scripts/common.sh@355 -- # echo 1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.880 13:06:45 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@353 -- # local d=2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.880 13:06:45 nvme -- scripts/common.sh@355 -- # echo 2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.880 13:06:45 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.880 13:06:45 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.880 13:06:45 nvme -- scripts/common.sh@368 -- # return 0 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.880 --rc genhtml_branch_coverage=1 00:08:53.880 --rc genhtml_function_coverage=1 00:08:53.880 --rc genhtml_legend=1 00:08:53.880 --rc geninfo_all_blocks=1 00:08:53.880 --rc geninfo_unexecuted_blocks=1 00:08:53.880 00:08:53.880 ' 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.880 --rc genhtml_branch_coverage=1 00:08:53.880 --rc genhtml_function_coverage=1 00:08:53.880 --rc genhtml_legend=1 00:08:53.880 --rc geninfo_all_blocks=1 00:08:53.880 --rc geninfo_unexecuted_blocks=1 00:08:53.880 00:08:53.880 ' 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.880 --rc genhtml_branch_coverage=1 00:08:53.880 --rc genhtml_function_coverage=1 00:08:53.880 --rc genhtml_legend=1 00:08:53.880 --rc geninfo_all_blocks=1 00:08:53.880 --rc geninfo_unexecuted_blocks=1 00:08:53.880 00:08:53.880 ' 00:08:53.880 13:06:45 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.880 --rc genhtml_branch_coverage=1 00:08:53.881 --rc genhtml_function_coverage=1 00:08:53.881 --rc genhtml_legend=1 00:08:53.881 --rc geninfo_all_blocks=1 00:08:53.881 --rc geninfo_unexecuted_blocks=1 00:08:53.881 00:08:53.881 ' 00:08:53.881 13:06:45 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.441 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.441 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.441 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.701 13:06:47 nvme -- nvme/nvme.sh@79 -- # uname 00:08:55.701 13:06:47 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:55.701 13:06:47 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:55.701 13:06:47 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:55.701 13:06:47 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1075 -- # stubpid=65429 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:55.701 Waiting for stub to ready for secondary processes... 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65429 ]] 00:08:55.701 13:06:47 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:55.701 [2024-12-11 13:06:47.133489] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:08:55.701 [2024-12-11 13:06:47.133613] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:56.638 13:06:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:56.638 13:06:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65429 ]] 00:08:56.638 13:06:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:57.575 [2024-12-11 13:06:48.793969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.575 [2024-12-11 13:06:48.920851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.575 [2024-12-11 13:06:48.921029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.575 [2024-12-11 13:06:48.921079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.575 [2024-12-11 13:06:48.939222] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:57.575 [2024-12-11 13:06:48.939259] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:57.575 [2024-12-11 13:06:48.955488] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:57.575 [2024-12-11 13:06:48.955615] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:57.575 [2024-12-11 13:06:48.958808] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:57.575 [2024-12-11 13:06:48.959005] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:57.575 [2024-12-11 13:06:48.959075] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:57.575 [2024-12-11 13:06:48.962271] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:57.575 [2024-12-11 13:06:48.962497] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:57.575 [2024-12-11 13:06:48.962595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:57.575 [2024-12-11 13:06:48.966257] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:57.575 [2024-12-11 13:06:48.966504] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:57.575 [2024-12-11 13:06:48.966599] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:57.576 [2024-12-11 13:06:48.966676] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:57.576 [2024-12-11 13:06:48.966753] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:57.576 done. 00:08:57.576 13:06:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:57.576 13:06:49 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:57.576 13:06:49 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:57.576 13:06:49 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:57.576 13:06:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.576 13:06:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.576 ************************************ 00:08:57.576 START TEST nvme_reset 00:08:57.576 ************************************ 00:08:57.576 13:06:49 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:57.834 Initializing NVMe Controllers 00:08:57.834 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:57.834 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:57.834 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:57.834 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:57.834 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:57.834 00:08:57.834 real 0m0.288s 00:08:57.834 user 0m0.092s 00:08:57.834 sys 0m0.155s 00:08:57.834 13:06:49 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.834 13:06:49 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:57.834 ************************************ 00:08:57.834 END TEST nvme_reset 00:08:57.834 ************************************ 00:08:58.093 13:06:49 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:58.093 13:06:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.093 13:06:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.093 13:06:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 ************************************ 00:08:58.093 START TEST nvme_identify 00:08:58.093 ************************************ 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:58.093 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:58.093 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:58.093 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.093 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:58.093 13:06:49 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.093 13:06:49 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.093 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:58.355 [2024-12-11 13:06:49.854070] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 65459 terminated unexpected 00:08:58.355 ===================================================== 00:08:58.355 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.355 ===================================================== 00:08:58.355 Controller Capabilities/Features 00:08:58.355 ================================ 00:08:58.355 Vendor ID: 1b36 00:08:58.355 Subsystem Vendor ID: 1af4 00:08:58.355 Serial Number: 12340 00:08:58.355 Model Number: QEMU NVMe Ctrl 00:08:58.355 Firmware Version: 8.0.0 00:08:58.355 Recommended Arb Burst: 6 00:08:58.355 IEEE OUI Identifier: 00 54 52 00:08:58.355 Multi-path I/O 00:08:58.355 May have multiple subsystem ports: No 00:08:58.355 May have multiple controllers: No 00:08:58.355 Associated with SR-IOV VF: No 00:08:58.355 Max Data Transfer Size: 524288 00:08:58.355 Max Number of Namespaces: 256 00:08:58.355 Max Number of I/O Queues: 64 00:08:58.355 NVMe Specification Version (VS): 1.4 00:08:58.355 NVMe Specification Version (Identify): 1.4 00:08:58.355 Maximum Queue Entries: 2048 00:08:58.355 Contiguous Queues Required: Yes 00:08:58.355 Arbitration Mechanisms Supported 00:08:58.355 Weighted Round Robin: Not Supported 00:08:58.355 Vendor Specific: Not Supported 00:08:58.355 Reset Timeout: 7500 ms 00:08:58.355 Doorbell Stride: 4 bytes 00:08:58.355 NVM Subsystem Reset: Not Supported 00:08:58.355 Command Sets Supported 00:08:58.355 NVM Command Set: Supported 00:08:58.355 Boot Partition: Not Supported 00:08:58.355 Memory Page Size Minimum: 4096 bytes 00:08:58.355 Memory Page Size Maximum: 65536 bytes 00:08:58.355 Persistent Memory Region: Not Supported 00:08:58.355 Optional Asynchronous Events Supported 00:08:58.355 Namespace Attribute Notices: Supported 00:08:58.355 Firmware Activation Notices: Not Supported 00:08:58.355 ANA Change Notices: Not Supported 00:08:58.355 PLE Aggregate Log Change Notices: Not Supported 00:08:58.355 LBA Status Info Alert Notices: Not Supported 00:08:58.355 EGE Aggregate Log Change Notices: Not Supported 00:08:58.355 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.355 Zone Descriptor Change Notices: Not Supported 00:08:58.355 Discovery Log Change Notices: Not Supported 00:08:58.355 Controller Attributes 00:08:58.355 128-bit Host Identifier: Not Supported 00:08:58.355 Non-Operational Permissive Mode: Not Supported 00:08:58.355 NVM Sets: Not Supported 00:08:58.355 Read Recovery Levels: Not Supported 00:08:58.355 Endurance Groups: Not Supported 00:08:58.355 Predictable Latency Mode: Not Supported 00:08:58.355 Traffic Based Keep ALive: Not Supported 00:08:58.355 Namespace Granularity: Not Supported 00:08:58.355 SQ Associations: Not Supported 00:08:58.355 UUID List: Not Supported 00:08:58.355 Multi-Domain Subsystem: Not Supported 00:08:58.355 Fixed Capacity Management: Not Supported 00:08:58.355 Variable Capacity Management: Not Supported 00:08:58.355 Delete Endurance Group: Not Supported 00:08:58.355 Delete NVM Set: Not Supported 00:08:58.355 Extended LBA Formats Supported: Supported 00:08:58.355 Flexible Data Placement Supported: Not Supported 00:08:58.355 00:08:58.355 Controller Memory Buffer Support 00:08:58.355 ================================ 00:08:58.356 Supported: No 
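The bdf list printed just above is how get_nvme_bdfs discovers controllers: gen_nvme.sh emits a JSON bdev config and jq pulls each PCIe address out of it. A minimal standalone sketch of the same pattern, using the repo path and jq filter shown verbatim in the trace (the standalone-script framing itself is illustrative):

  # Enumerate NVMe PCI addresses the way get_nvme_bdfs does above:
  # gen_nvme.sh prints a JSON config; jq extracts every traddr.
  rootdir=/home/vagrant/spdk_repo/spdk   # repo path as shown in the trace
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1        # the trace checks the count the same way: (( 4 == 0 ))
  printf '%s\n' "${bdfs[@]}"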
00:08:58.356 00:08:58.356 Persistent Memory Region Support 00:08:58.356 ================================ 00:08:58.356 Supported: No 00:08:58.356 00:08:58.356 Admin Command Set Attributes 00:08:58.356 ============================ 00:08:58.356 Security Send/Receive: Not Supported 00:08:58.356 Format NVM: Supported 00:08:58.356 Firmware Activate/Download: Not Supported 00:08:58.356 Namespace Management: Supported 00:08:58.356 Device Self-Test: Not Supported 00:08:58.356 Directives: Supported 00:08:58.356 NVMe-MI: Not Supported 00:08:58.356 Virtualization Management: Not Supported 00:08:58.356 Doorbell Buffer Config: Supported 00:08:58.356 Get LBA Status Capability: Not Supported 00:08:58.356 Command & Feature Lockdown Capability: Not Supported 00:08:58.356 Abort Command Limit: 4 00:08:58.356 Async Event Request Limit: 4 00:08:58.356 Number of Firmware Slots: N/A 00:08:58.356 Firmware Slot 1 Read-Only: N/A 00:08:58.356 Firmware Activation Without Reset: N/A 00:08:58.356 Multiple Update Detection Support: N/A 00:08:58.356 Firmware Update Granularity: No Information Provided 00:08:58.356 Per-Namespace SMART Log: Yes 00:08:58.356 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.356 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:58.356 Command Effects Log Page: Supported 00:08:58.356 Get Log Page Extended Data: Supported 00:08:58.356 Telemetry Log Pages: Not Supported 00:08:58.356 Persistent Event Log Pages: Not Supported 00:08:58.356 Supported Log Pages Log Page: May Support 00:08:58.356 Commands Supported & Effects Log Page: Not Supported 00:08:58.356 Feature Identifiers & Effects Log Page:May Support 00:08:58.356 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.356 Data Area 4 for Telemetry Log: Not Supported 00:08:58.356 Error Log Page Entries Supported: 1 00:08:58.356 Keep Alive: Not Supported 00:08:58.356 00:08:58.356 NVM Command Set Attributes 00:08:58.356 ========================== 00:08:58.356 Submission Queue Entry Size 00:08:58.356 Max: 64 00:08:58.356 Min: 64 00:08:58.356 Completion Queue Entry Size 00:08:58.356 Max: 16 00:08:58.356 Min: 16 00:08:58.356 Number of Namespaces: 256 00:08:58.356 Compare Command: Supported 00:08:58.356 Write Uncorrectable Command: Not Supported 00:08:58.356 Dataset Management Command: Supported 00:08:58.356 Write Zeroes Command: Supported 00:08:58.356 Set Features Save Field: Supported 00:08:58.356 Reservations: Not Supported 00:08:58.356 Timestamp: Supported 00:08:58.356 Copy: Supported 00:08:58.356 Volatile Write Cache: Present 00:08:58.356 Atomic Write Unit (Normal): 1 00:08:58.356 Atomic Write Unit (PFail): 1 00:08:58.356 Atomic Compare & Write Unit: 1 00:08:58.356 Fused Compare & Write: Not Supported 00:08:58.356 Scatter-Gather List 00:08:58.356 SGL Command Set: Supported 00:08:58.356 SGL Keyed: Not Supported 00:08:58.356 SGL Bit Bucket Descriptor: Not Supported 00:08:58.356 SGL Metadata Pointer: Not Supported 00:08:58.356 Oversized SGL: Not Supported 00:08:58.356 SGL Metadata Address: Not Supported 00:08:58.356 SGL Offset: Not Supported 00:08:58.356 Transport SGL Data Block: Not Supported 00:08:58.356 Replay Protected Memory Block: Not Supported 00:08:58.356 00:08:58.356 Firmware Slot Information 00:08:58.356 ========================= 00:08:58.356 Active slot: 1 00:08:58.356 Slot 1 Firmware Revision: 1.0 00:08:58.356 00:08:58.356 00:08:58.356 Commands Supported and Effects 00:08:58.356 ============================== 00:08:58.356 Admin Commands 00:08:58.356 -------------- 00:08:58.356 Delete I/O Submission Queue (00h): Supported 
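Each controller dump in this run repeats the same field layout, so plain text filters are enough to pick out the identity of every controller. A rough sketch, assuming the identify output has been captured to a file named identify.log (that filename is hypothetical):

  # Pull the identity fields out of a saved copy of this dump.
  # identify.log is a hypothetical capture of the spdk_nvme_identify output above.
  grep -E 'Serial Number|Model Number|Firmware Version' identify.log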
00:08:58.356 Create I/O Submission Queue (01h): Supported 00:08:58.356 Get Log Page (02h): Supported 00:08:58.356 Delete I/O Completion Queue (04h): Supported 00:08:58.356 Create I/O Completion Queue (05h): Supported 00:08:58.356 Identify (06h): Supported 00:08:58.356 Abort (08h): Supported 00:08:58.356 Set Features (09h): Supported 00:08:58.356 Get Features (0Ah): Supported 00:08:58.356 Asynchronous Event Request (0Ch): Supported 00:08:58.356 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.356 Directive Send (19h): Supported 00:08:58.356 Directive Receive (1Ah): Supported 00:08:58.356 Virtualization Management (1Ch): Supported 00:08:58.356 Doorbell Buffer Config (7Ch): Supported 00:08:58.356 Format NVM (80h): Supported LBA-Change 00:08:58.356 I/O Commands 00:08:58.356 ------------ 00:08:58.356 Flush (00h): Supported LBA-Change 00:08:58.356 Write (01h): Supported LBA-Change 00:08:58.356 Read (02h): Supported 00:08:58.356 Compare (05h): Supported 00:08:58.356 Write Zeroes (08h): Supported LBA-Change 00:08:58.356 Dataset Management (09h): Supported LBA-Change 00:08:58.356 Unknown (0Ch): Supported 00:08:58.356 Unknown (12h): Supported 00:08:58.356 Copy (19h): Supported LBA-Change 00:08:58.356 Unknown (1Dh): Supported LBA-Change 00:08:58.356 00:08:58.356 Error Log 00:08:58.356 ========= 00:08:58.356 00:08:58.356 Arbitration 00:08:58.356 =========== 00:08:58.356 Arbitration Burst: no limit 00:08:58.356 00:08:58.356 Power Management 00:08:58.356 ================ 00:08:58.356 Number of Power States: 1 00:08:58.356 Current Power State: Power State #0 00:08:58.356 Power State #0: 00:08:58.356 Max Power: 25.00 W 00:08:58.356 Non-Operational State: Operational 00:08:58.356 Entry Latency: 16 microseconds 00:08:58.356 Exit Latency: 4 microseconds 00:08:58.356 Relative Read Throughput: 0 00:08:58.356 Relative Read Latency: 0 00:08:58.356 Relative Write Throughput: 0 00:08:58.356 Relative Write Latency: 0 00:08:58.356 [2024-12-11 13:06:49.855528] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 65459 terminated unexpected 00:08:58.356 Idle Power: Not Reported 00:08:58.356 Active Power: Not Reported 00:08:58.356 Non-Operational Permissive Mode: Not Supported 00:08:58.356 00:08:58.356 Health Information 00:08:58.356 ================== 00:08:58.356 Critical Warnings: 00:08:58.356 Available Spare Space: OK 00:08:58.356 Temperature: OK 00:08:58.356 Device Reliability: OK 00:08:58.356 Read Only: No 00:08:58.356 Volatile Memory Backup: OK 00:08:58.356 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.356 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.356 Available Spare: 0% 00:08:58.356 Available Spare Threshold: 0% 00:08:58.356 Life Percentage Used: 0% 00:08:58.356 Data Units Read: 737 00:08:58.356 Data Units Written: 665 00:08:58.356 Host Read Commands: 33018 00:08:58.356 Host Write Commands: 32804 00:08:58.356 Controller Busy Time: 0 minutes 00:08:58.356 Power Cycles: 0 00:08:58.356 Power On Hours: 0 hours 00:08:58.356 Unsafe Shutdowns: 0 00:08:58.356 Unrecoverable Media Errors: 0 00:08:58.356 Lifetime Error Log Entries: 0 00:08:58.356 Warning Temperature Time: 0 minutes 00:08:58.356 Critical Temperature Time: 0 minutes 00:08:58.356 00:08:58.356 Number of Queues 00:08:58.356 ================ 00:08:58.356 Number of I/O Submission Queues: 64 00:08:58.356 Number of I/O Completion Queues: 64 00:08:58.356 00:08:58.356 ZNS Specific Controller Data 00:08:58.356 ============================ 00:08:58.356 Zone Append Size Limit: 0 00:08:58.356 
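The health block above reports temperatures in Kelvin and derives the Celsius value by subtracting 273, which is easy to verify:

  # 323 Kelvin -> 50 Celsius, matching "Current Temperature" above.
  kelvin=323
  echo "$(( kelvin - 273 )) Celsius"   # prints: 50 Celsius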
00:08:58.356 00:08:58.356 Active Namespaces 00:08:58.356 ================= 00:08:58.356 Namespace ID:1 00:08:58.356 Error Recovery Timeout: Unlimited 00:08:58.356 Command Set Identifier: NVM (00h) 00:08:58.356 Deallocate: Supported 00:08:58.356 Deallocated/Unwritten Error: Supported 00:08:58.356 Deallocated Read Value: All 0x00 00:08:58.356 Deallocate in Write Zeroes: Not Supported 00:08:58.356 Deallocated Guard Field: 0xFFFF 00:08:58.356 Flush: Supported 00:08:58.356 Reservation: Not Supported 00:08:58.356 Metadata Transferred as: Separate Metadata Buffer 00:08:58.356 Namespace Sharing Capabilities: Private 00:08:58.356 Size (in LBAs): 1548666 (5GiB) 00:08:58.356 Capacity (in LBAs): 1548666 (5GiB) 00:08:58.356 Utilization (in LBAs): 1548666 (5GiB) 00:08:58.356 Thin Provisioning: Not Supported 00:08:58.356 Per-NS Atomic Units: No 00:08:58.356 Maximum Single Source Range Length: 128 00:08:58.356 Maximum Copy Length: 128 00:08:58.356 Maximum Source Range Count: 128 00:08:58.356 NGUID/EUI64 Never Reused: No 00:08:58.356 Namespace Write Protected: No 00:08:58.356 Number of LBA Formats: 8 00:08:58.356 Current LBA Format: LBA Format #07 00:08:58.356 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.356 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.356 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.356 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.356 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.356 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.356 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.356 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.356 00:08:58.356 NVM Specific Namespace Data 00:08:58.356 =========================== 00:08:58.356 Logical Block Storage Tag Mask: 0 00:08:58.356 Protection Information Capabilities: 00:08:58.356 16b Guard Protection Information Storage Tag Support: No 00:08:58.356 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.357 Storage Tag Check Read Support: No 00:08:58.357 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.357 ===================================================== 00:08:58.357 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.357 ===================================================== 00:08:58.357 Controller Capabilities/Features 00:08:58.357 ================================ 00:08:58.357 Vendor ID: 1b36 00:08:58.357 Subsystem Vendor ID: 1af4 00:08:58.357 Serial Number: 12341 00:08:58.357 Model Number: QEMU NVMe Ctrl 00:08:58.357 Firmware Version: 8.0.0 00:08:58.357 Recommended Arb Burst: 6 00:08:58.357 IEEE OUI Identifier: 00 54 52 00:08:58.357 Multi-path I/O 00:08:58.357 May have multiple subsystem ports: No 00:08:58.357 May have multiple controllers: No 
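The "(5GiB)" annotations above follow directly from the raw numbers: the namespace reports 1548666 LBAs, its current LBA format (#07) uses a 4096-byte data size, and the tool floors the product to whole GiB. A one-line check:

  # 1548666 LBAs * 4096 bytes = 6343335936 bytes, floored to whole GiB.
  echo "$(( 1548666 * 4096 / 1024 ** 3 ))GiB"   # prints: 5GiB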
00:08:58.357 Associated with SR-IOV VF: No 00:08:58.357 Max Data Transfer Size: 524288 00:08:58.357 Max Number of Namespaces: 256 00:08:58.357 Max Number of I/O Queues: 64 00:08:58.357 NVMe Specification Version (VS): 1.4 00:08:58.357 NVMe Specification Version (Identify): 1.4 00:08:58.357 Maximum Queue Entries: 2048 00:08:58.357 Contiguous Queues Required: Yes 00:08:58.357 Arbitration Mechanisms Supported 00:08:58.357 Weighted Round Robin: Not Supported 00:08:58.357 Vendor Specific: Not Supported 00:08:58.357 Reset Timeout: 7500 ms 00:08:58.357 Doorbell Stride: 4 bytes 00:08:58.357 NVM Subsystem Reset: Not Supported 00:08:58.357 Command Sets Supported 00:08:58.357 NVM Command Set: Supported 00:08:58.357 Boot Partition: Not Supported 00:08:58.357 Memory Page Size Minimum: 4096 bytes 00:08:58.357 Memory Page Size Maximum: 65536 bytes 00:08:58.357 Persistent Memory Region: Not Supported 00:08:58.357 Optional Asynchronous Events Supported 00:08:58.357 Namespace Attribute Notices: Supported 00:08:58.357 Firmware Activation Notices: Not Supported 00:08:58.357 ANA Change Notices: Not Supported 00:08:58.357 PLE Aggregate Log Change Notices: Not Supported 00:08:58.357 LBA Status Info Alert Notices: Not Supported 00:08:58.357 EGE Aggregate Log Change Notices: Not Supported 00:08:58.357 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.357 Zone Descriptor Change Notices: Not Supported 00:08:58.357 Discovery Log Change Notices: Not Supported 00:08:58.357 Controller Attributes 00:08:58.357 128-bit Host Identifier: Not Supported 00:08:58.357 Non-Operational Permissive Mode: Not Supported 00:08:58.357 NVM Sets: Not Supported 00:08:58.357 Read Recovery Levels: Not Supported 00:08:58.357 Endurance Groups: Not Supported 00:08:58.357 Predictable Latency Mode: Not Supported 00:08:58.357 Traffic Based Keep ALive: Not Supported 00:08:58.357 Namespace Granularity: Not Supported 00:08:58.357 SQ Associations: Not Supported 00:08:58.357 UUID List: Not Supported 00:08:58.357 Multi-Domain Subsystem: Not Supported 00:08:58.357 Fixed Capacity Management: Not Supported 00:08:58.357 Variable Capacity Management: Not Supported 00:08:58.357 Delete Endurance Group: Not Supported 00:08:58.357 Delete NVM Set: Not Supported 00:08:58.357 Extended LBA Formats Supported: Supported 00:08:58.357 Flexible Data Placement Supported: Not Supported 00:08:58.357 00:08:58.357 Controller Memory Buffer Support 00:08:58.357 ================================ 00:08:58.357 Supported: No 00:08:58.357 00:08:58.357 Persistent Memory Region Support 00:08:58.357 ================================ 00:08:58.357 Supported: No 00:08:58.357 00:08:58.357 Admin Command Set Attributes 00:08:58.357 ============================ 00:08:58.357 Security Send/Receive: Not Supported 00:08:58.357 Format NVM: Supported 00:08:58.357 Firmware Activate/Download: Not Supported 00:08:58.357 Namespace Management: Supported 00:08:58.357 Device Self-Test: Not Supported 00:08:58.357 Directives: Supported 00:08:58.357 NVMe-MI: Not Supported 00:08:58.357 Virtualization Management: Not Supported 00:08:58.357 Doorbell Buffer Config: Supported 00:08:58.357 Get LBA Status Capability: Not Supported 00:08:58.357 Command & Feature Lockdown Capability: Not Supported 00:08:58.357 Abort Command Limit: 4 00:08:58.357 Async Event Request Limit: 4 00:08:58.357 Number of Firmware Slots: N/A 00:08:58.357 Firmware Slot 1 Read-Only: N/A 00:08:58.357 Firmware Activation Without Reset: N/A 00:08:58.357 Multiple Update Detection Support: N/A 00:08:58.357 Firmware Update Granularity: No 
Information Provided 00:08:58.357 Per-Namespace SMART Log: Yes 00:08:58.357 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.357 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:58.357 Command Effects Log Page: Supported 00:08:58.357 Get Log Page Extended Data: Supported 00:08:58.357 Telemetry Log Pages: Not Supported 00:08:58.357 Persistent Event Log Pages: Not Supported 00:08:58.357 Supported Log Pages Log Page: May Support 00:08:58.357 Commands Supported & Effects Log Page: Not Supported 00:08:58.357 Feature Identifiers & Effects Log Page:May Support 00:08:58.357 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.357 Data Area 4 for Telemetry Log: Not Supported 00:08:58.357 Error Log Page Entries Supported: 1 00:08:58.357 Keep Alive: Not Supported 00:08:58.357 00:08:58.357 NVM Command Set Attributes 00:08:58.357 ========================== 00:08:58.357 Submission Queue Entry Size 00:08:58.357 Max: 64 00:08:58.357 Min: 64 00:08:58.357 Completion Queue Entry Size 00:08:58.357 Max: 16 00:08:58.357 Min: 16 00:08:58.357 Number of Namespaces: 256 00:08:58.357 Compare Command: Supported 00:08:58.357 Write Uncorrectable Command: Not Supported 00:08:58.357 Dataset Management Command: Supported 00:08:58.357 Write Zeroes Command: Supported 00:08:58.357 Set Features Save Field: Supported 00:08:58.357 Reservations: Not Supported 00:08:58.357 Timestamp: Supported 00:08:58.357 Copy: Supported 00:08:58.357 Volatile Write Cache: Present 00:08:58.357 Atomic Write Unit (Normal): 1 00:08:58.357 Atomic Write Unit (PFail): 1 00:08:58.357 Atomic Compare & Write Unit: 1 00:08:58.357 Fused Compare & Write: Not Supported 00:08:58.357 Scatter-Gather List 00:08:58.357 SGL Command Set: Supported 00:08:58.357 SGL Keyed: Not Supported 00:08:58.357 SGL Bit Bucket Descriptor: Not Supported 00:08:58.357 SGL Metadata Pointer: Not Supported 00:08:58.357 Oversized SGL: Not Supported 00:08:58.357 SGL Metadata Address: Not Supported 00:08:58.357 SGL Offset: Not Supported 00:08:58.357 Transport SGL Data Block: Not Supported 00:08:58.357 Replay Protected Memory Block: Not Supported 00:08:58.357 00:08:58.357 Firmware Slot Information 00:08:58.357 ========================= 00:08:58.357 Active slot: 1 00:08:58.357 Slot 1 Firmware Revision: 1.0 00:08:58.357 00:08:58.357 00:08:58.357 Commands Supported and Effects 00:08:58.357 ============================== 00:08:58.357 Admin Commands 00:08:58.357 -------------- 00:08:58.357 Delete I/O Submission Queue (00h): Supported 00:08:58.357 Create I/O Submission Queue (01h): Supported 00:08:58.357 Get Log Page (02h): Supported 00:08:58.357 Delete I/O Completion Queue (04h): Supported 00:08:58.357 Create I/O Completion Queue (05h): Supported 00:08:58.357 Identify (06h): Supported 00:08:58.357 Abort (08h): Supported 00:08:58.357 Set Features (09h): Supported 00:08:58.357 Get Features (0Ah): Supported 00:08:58.357 Asynchronous Event Request (0Ch): Supported 00:08:58.357 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.357 Directive Send (19h): Supported 00:08:58.357 Directive Receive (1Ah): Supported 00:08:58.357 Virtualization Management (1Ch): Supported 00:08:58.357 Doorbell Buffer Config (7Ch): Supported 00:08:58.357 Format NVM (80h): Supported LBA-Change 00:08:58.357 I/O Commands 00:08:58.357 ------------ 00:08:58.357 Flush (00h): Supported LBA-Change 00:08:58.357 Write (01h): Supported LBA-Change 00:08:58.357 Read (02h): Supported 00:08:58.357 Compare (05h): Supported 00:08:58.357 Write Zeroes (08h): Supported LBA-Change 00:08:58.357 Dataset Management 
(09h): Supported LBA-Change 00:08:58.357 Unknown (0Ch): Supported 00:08:58.357 Unknown (12h): Supported 00:08:58.357 Copy (19h): Supported LBA-Change 00:08:58.357 Unknown (1Dh): Supported LBA-Change 00:08:58.357 00:08:58.357 Error Log 00:08:58.357 ========= 00:08:58.357 00:08:58.357 Arbitration 00:08:58.357 =========== 00:08:58.357 Arbitration Burst: no limit 00:08:58.357 00:08:58.357 Power Management 00:08:58.357 ================ 00:08:58.357 Number of Power States: 1 00:08:58.357 Current Power State: Power State #0 00:08:58.357 Power State #0: 00:08:58.357 Max Power: 25.00 W 00:08:58.357 Non-Operational State: Operational 00:08:58.357 Entry Latency: 16 microseconds 00:08:58.358 Exit Latency: 4 microseconds 00:08:58.358 Relative Read Throughput: 0 00:08:58.358 Relative Read Latency: 0 00:08:58.358 Relative Write Throughput: 0 00:08:58.358 Relative Write Latency: 0 00:08:58.358 Idle Power: Not Reported 00:08:58.358 Active Power: Not Reported 00:08:58.358 Non-Operational Permissive Mode: Not Supported 00:08:58.358 00:08:58.358 Health Information 00:08:58.358 ================== 00:08:58.358 Critical Warnings: 00:08:58.358 Available Spare Space: OK 00:08:58.358 [2024-12-11 13:06:49.856566] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 65459 terminated unexpected 00:08:58.358 Temperature: OK 00:08:58.358 Device Reliability: OK 00:08:58.358 Read Only: No 00:08:58.358 Volatile Memory Backup: OK 00:08:58.358 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.358 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.358 Available Spare: 0% 00:08:58.358 Available Spare Threshold: 0% 00:08:58.358 Life Percentage Used: 0% 00:08:58.358 Data Units Read: 1140 00:08:58.358 Data Units Written: 1007 00:08:58.358 Host Read Commands: 49944 00:08:58.358 Host Write Commands: 48735 00:08:58.358 Controller Busy Time: 0 minutes 00:08:58.358 Power Cycles: 0 00:08:58.358 Power On Hours: 0 hours 00:08:58.358 Unsafe Shutdowns: 0 00:08:58.358 Unrecoverable Media Errors: 0 00:08:58.358 Lifetime Error Log Entries: 0 00:08:58.358 Warning Temperature Time: 0 minutes 00:08:58.358 Critical Temperature Time: 0 minutes 00:08:58.358 00:08:58.358 Number of Queues 00:08:58.358 ================ 00:08:58.358 Number of I/O Submission Queues: 64 00:08:58.358 Number of I/O Completion Queues: 64 00:08:58.358 00:08:58.358 ZNS Specific Controller Data 00:08:58.358 ============================ 00:08:58.358 Zone Append Size Limit: 0 00:08:58.358 00:08:58.358 00:08:58.358 Active Namespaces 00:08:58.358 ================= 00:08:58.358 Namespace ID:1 00:08:58.358 Error Recovery Timeout: Unlimited 00:08:58.358 Command Set Identifier: NVM (00h) 00:08:58.358 Deallocate: Supported 00:08:58.358 Deallocated/Unwritten Error: Supported 00:08:58.358 Deallocated Read Value: All 0x00 00:08:58.358 Deallocate in Write Zeroes: Not Supported 00:08:58.358 Deallocated Guard Field: 0xFFFF 00:08:58.358 Flush: Supported 00:08:58.358 Reservation: Not Supported 00:08:58.358 Namespace Sharing Capabilities: Private 00:08:58.358 Size (in LBAs): 1310720 (5GiB) 00:08:58.358 Capacity (in LBAs): 1310720 (5GiB) 00:08:58.358 Utilization (in LBAs): 1310720 (5GiB) 00:08:58.358 Thin Provisioning: Not Supported 00:08:58.358 Per-NS Atomic Units: No 00:08:58.358 Maximum Single Source Range Length: 128 00:08:58.358 Maximum Copy Length: 128 00:08:58.358 Maximum Source Range Count: 128 00:08:58.358 NGUID/EUI64 Never Reused: No 00:08:58.358 Namespace Write Protected: No 00:08:58.358 Number of LBA Formats: 8 00:08:58.358 Current LBA 
Format: LBA Format #04 00:08:58.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.358 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.358 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.358 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.358 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.358 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.358 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.358 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.358 00:08:58.358 NVM Specific Namespace Data 00:08:58.358 =========================== 00:08:58.358 Logical Block Storage Tag Mask: 0 00:08:58.358 Protection Information Capabilities: 00:08:58.358 16b Guard Protection Information Storage Tag Support: No 00:08:58.358 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.358 Storage Tag Check Read Support: No 00:08:58.358 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.358 ===================================================== 00:08:58.358 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.358 ===================================================== 00:08:58.358 Controller Capabilities/Features 00:08:58.358 ================================ 00:08:58.358 Vendor ID: 1b36 00:08:58.358 Subsystem Vendor ID: 1af4 00:08:58.358 Serial Number: 12343 00:08:58.358 Model Number: QEMU NVMe Ctrl 00:08:58.358 Firmware Version: 8.0.0 00:08:58.358 Recommended Arb Burst: 6 00:08:58.358 IEEE OUI Identifier: 00 54 52 00:08:58.358 Multi-path I/O 00:08:58.358 May have multiple subsystem ports: No 00:08:58.358 May have multiple controllers: Yes 00:08:58.358 Associated with SR-IOV VF: No 00:08:58.358 Max Data Transfer Size: 524288 00:08:58.358 Max Number of Namespaces: 256 00:08:58.358 Max Number of I/O Queues: 64 00:08:58.358 NVMe Specification Version (VS): 1.4 00:08:58.358 NVMe Specification Version (Identify): 1.4 00:08:58.358 Maximum Queue Entries: 2048 00:08:58.358 Contiguous Queues Required: Yes 00:08:58.358 Arbitration Mechanisms Supported 00:08:58.358 Weighted Round Robin: Not Supported 00:08:58.358 Vendor Specific: Not Supported 00:08:58.358 Reset Timeout: 7500 ms 00:08:58.358 Doorbell Stride: 4 bytes 00:08:58.358 NVM Subsystem Reset: Not Supported 00:08:58.358 Command Sets Supported 00:08:58.358 NVM Command Set: Supported 00:08:58.358 Boot Partition: Not Supported 00:08:58.358 Memory Page Size Minimum: 4096 bytes 00:08:58.358 Memory Page Size Maximum: 65536 bytes 00:08:58.358 Persistent Memory Region: Not Supported 00:08:58.358 Optional Asynchronous Events Supported 00:08:58.358 Namespace Attribute Notices: Supported 00:08:58.358 Firmware Activation Notices: Not Supported 00:08:58.358 ANA Change Notices: Not Supported 00:08:58.358 PLE Aggregate 
Log Change Notices: Not Supported 00:08:58.358 LBA Status Info Alert Notices: Not Supported 00:08:58.358 EGE Aggregate Log Change Notices: Not Supported 00:08:58.358 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.358 Zone Descriptor Change Notices: Not Supported 00:08:58.358 Discovery Log Change Notices: Not Supported 00:08:58.358 Controller Attributes 00:08:58.358 128-bit Host Identifier: Not Supported 00:08:58.358 Non-Operational Permissive Mode: Not Supported 00:08:58.358 NVM Sets: Not Supported 00:08:58.358 Read Recovery Levels: Not Supported 00:08:58.358 Endurance Groups: Supported 00:08:58.358 Predictable Latency Mode: Not Supported 00:08:58.358 Traffic Based Keep ALive: Not Supported 00:08:58.358 Namespace Granularity: Not Supported 00:08:58.358 SQ Associations: Not Supported 00:08:58.358 UUID List: Not Supported 00:08:58.358 Multi-Domain Subsystem: Not Supported 00:08:58.358 Fixed Capacity Management: Not Supported 00:08:58.358 Variable Capacity Management: Not Supported 00:08:58.358 Delete Endurance Group: Not Supported 00:08:58.358 Delete NVM Set: Not Supported 00:08:58.358 Extended LBA Formats Supported: Supported 00:08:58.358 Flexible Data Placement Supported: Supported 00:08:58.358 00:08:58.358 Controller Memory Buffer Support 00:08:58.358 ================================ 00:08:58.358 Supported: No 00:08:58.358 00:08:58.358 Persistent Memory Region Support 00:08:58.358 ================================ 00:08:58.358 Supported: No 00:08:58.358 00:08:58.358 Admin Command Set Attributes 00:08:58.358 ============================ 00:08:58.358 Security Send/Receive: Not Supported 00:08:58.358 Format NVM: Supported 00:08:58.358 Firmware Activate/Download: Not Supported 00:08:58.358 Namespace Management: Supported 00:08:58.358 Device Self-Test: Not Supported 00:08:58.358 Directives: Supported 00:08:58.358 NVMe-MI: Not Supported 00:08:58.358 Virtualization Management: Not Supported 00:08:58.358 Doorbell Buffer Config: Supported 00:08:58.358 Get LBA Status Capability: Not Supported 00:08:58.358 Command & Feature Lockdown Capability: Not Supported 00:08:58.358 Abort Command Limit: 4 00:08:58.358 Async Event Request Limit: 4 00:08:58.358 Number of Firmware Slots: N/A 00:08:58.358 Firmware Slot 1 Read-Only: N/A 00:08:58.358 Firmware Activation Without Reset: N/A 00:08:58.358 Multiple Update Detection Support: N/A 00:08:58.358 Firmware Update Granularity: No Information Provided 00:08:58.358 Per-Namespace SMART Log: Yes 00:08:58.358 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.358 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:58.358 Command Effects Log Page: Supported 00:08:58.358 Get Log Page Extended Data: Supported 00:08:58.358 Telemetry Log Pages: Not Supported 00:08:58.358 Persistent Event Log Pages: Not Supported 00:08:58.358 Supported Log Pages Log Page: May Support 00:08:58.358 Commands Supported & Effects Log Page: Not Supported 00:08:58.358 Feature Identifiers & Effects Log Page:May Support 00:08:58.358 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.358 Data Area 4 for Telemetry Log: Not Supported 00:08:58.358 Error Log Page Entries Supported: 1 00:08:58.358 Keep Alive: Not Supported 00:08:58.358 00:08:58.359 NVM Command Set Attributes 00:08:58.359 ========================== 00:08:58.359 Submission Queue Entry Size 00:08:58.359 Max: 64 00:08:58.359 Min: 64 00:08:58.359 Completion Queue Entry Size 00:08:58.359 Max: 16 00:08:58.359 Min: 16 00:08:58.359 Number of Namespaces: 256 00:08:58.359 Compare Command: Supported 00:08:58.359 Write 
Uncorrectable Command: Not Supported 00:08:58.359 Dataset Management Command: Supported 00:08:58.359 Write Zeroes Command: Supported 00:08:58.359 Set Features Save Field: Supported 00:08:58.359 Reservations: Not Supported 00:08:58.359 Timestamp: Supported 00:08:58.359 Copy: Supported 00:08:58.359 Volatile Write Cache: Present 00:08:58.359 Atomic Write Unit (Normal): 1 00:08:58.359 Atomic Write Unit (PFail): 1 00:08:58.359 Atomic Compare & Write Unit: 1 00:08:58.359 Fused Compare & Write: Not Supported 00:08:58.359 Scatter-Gather List 00:08:58.359 SGL Command Set: Supported 00:08:58.359 SGL Keyed: Not Supported 00:08:58.359 SGL Bit Bucket Descriptor: Not Supported 00:08:58.359 SGL Metadata Pointer: Not Supported 00:08:58.359 Oversized SGL: Not Supported 00:08:58.359 SGL Metadata Address: Not Supported 00:08:58.359 SGL Offset: Not Supported 00:08:58.359 Transport SGL Data Block: Not Supported 00:08:58.359 Replay Protected Memory Block: Not Supported 00:08:58.359 00:08:58.359 Firmware Slot Information 00:08:58.359 ========================= 00:08:58.359 Active slot: 1 00:08:58.359 Slot 1 Firmware Revision: 1.0 00:08:58.359 00:08:58.359 00:08:58.359 Commands Supported and Effects 00:08:58.359 ============================== 00:08:58.359 Admin Commands 00:08:58.359 -------------- 00:08:58.359 Delete I/O Submission Queue (00h): Supported 00:08:58.359 Create I/O Submission Queue (01h): Supported 00:08:58.359 Get Log Page (02h): Supported 00:08:58.359 Delete I/O Completion Queue (04h): Supported 00:08:58.359 Create I/O Completion Queue (05h): Supported 00:08:58.359 Identify (06h): Supported 00:08:58.359 Abort (08h): Supported 00:08:58.359 Set Features (09h): Supported 00:08:58.359 Get Features (0Ah): Supported 00:08:58.359 Asynchronous Event Request (0Ch): Supported 00:08:58.359 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.359 Directive Send (19h): Supported 00:08:58.359 Directive Receive (1Ah): Supported 00:08:58.359 Virtualization Management (1Ch): Supported 00:08:58.359 Doorbell Buffer Config (7Ch): Supported 00:08:58.359 Format NVM (80h): Supported LBA-Change 00:08:58.359 I/O Commands 00:08:58.359 ------------ 00:08:58.359 Flush (00h): Supported LBA-Change 00:08:58.359 Write (01h): Supported LBA-Change 00:08:58.359 Read (02h): Supported 00:08:58.359 Compare (05h): Supported 00:08:58.359 Write Zeroes (08h): Supported LBA-Change 00:08:58.359 Dataset Management (09h): Supported LBA-Change 00:08:58.359 Unknown (0Ch): Supported 00:08:58.359 Unknown (12h): Supported 00:08:58.359 Copy (19h): Supported LBA-Change 00:08:58.359 Unknown (1Dh): Supported LBA-Change 00:08:58.359 00:08:58.359 Error Log 00:08:58.359 ========= 00:08:58.359 00:08:58.359 Arbitration 00:08:58.359 =========== 00:08:58.359 Arbitration Burst: no limit 00:08:58.359 00:08:58.359 Power Management 00:08:58.359 ================ 00:08:58.359 Number of Power States: 1 00:08:58.359 Current Power State: Power State #0 00:08:58.359 Power State #0: 00:08:58.359 Max Power: 25.00 W 00:08:58.359 Non-Operational State: Operational 00:08:58.359 Entry Latency: 16 microseconds 00:08:58.359 Exit Latency: 4 microseconds 00:08:58.359 Relative Read Throughput: 0 00:08:58.359 Relative Read Latency: 0 00:08:58.359 Relative Write Throughput: 0 00:08:58.359 Relative Write Latency: 0 00:08:58.359 Idle Power: Not Reported 00:08:58.359 Active Power: Not Reported 00:08:58.359 Non-Operational Permissive Mode: Not Supported 00:08:58.359 00:08:58.359 Health Information 00:08:58.359 ================== 00:08:58.359 Critical Warnings: 00:08:58.359 
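The health section being printed here is the controller's SMART / Health Information log page. After a setup.sh reset rebinds the devices to the kernel nvme driver (as the reset step earlier in this log does), the same data can be read with nvme-cli, assuming it is installed on the guest and assuming the device name:

  # Read the same health/SMART log through the kernel driver (device name assumed).
  sudo nvme smart-log /dev/nvme0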
Available Spare Space: OK 00:08:58.359 Temperature: OK 00:08:58.359 Device Reliability: OK 00:08:58.359 Read Only: No 00:08:58.359 Volatile Memory Backup: OK 00:08:58.359 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.359 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.359 Available Spare: 0% 00:08:58.359 Available Spare Threshold: 0% 00:08:58.359 Life Percentage Used: 0% 00:08:58.359 Data Units Read: 846 00:08:58.359 Data Units Written: 775 00:08:58.359 Host Read Commands: 34109 00:08:58.359 Host Write Commands: 33533 00:08:58.359 Controller Busy Time: 0 minutes 00:08:58.359 Power Cycles: 0 00:08:58.359 Power On Hours: 0 hours 00:08:58.359 Unsafe Shutdowns: 0 00:08:58.359 Unrecoverable Media Errors: 0 00:08:58.359 Lifetime Error Log Entries: 0 00:08:58.359 Warning Temperature Time: 0 minutes 00:08:58.359 Critical Temperature Time: 0 minutes 00:08:58.359 00:08:58.359 Number of Queues 00:08:58.359 ================ 00:08:58.359 Number of I/O Submission Queues: 64 00:08:58.359 Number of I/O Completion Queues: 64 00:08:58.359 00:08:58.359 ZNS Specific Controller Data 00:08:58.359 ============================ 00:08:58.359 Zone Append Size Limit: 0 00:08:58.359 00:08:58.359 00:08:58.359 Active Namespaces 00:08:58.359 ================= 00:08:58.359 Namespace ID:1 00:08:58.359 Error Recovery Timeout: Unlimited 00:08:58.359 Command Set Identifier: NVM (00h) 00:08:58.359 Deallocate: Supported 00:08:58.359 Deallocated/Unwritten Error: Supported 00:08:58.359 Deallocated Read Value: All 0x00 00:08:58.359 Deallocate in Write Zeroes: Not Supported 00:08:58.359 Deallocated Guard Field: 0xFFFF 00:08:58.359 Flush: Supported 00:08:58.359 Reservation: Not Supported 00:08:58.359 Namespace Sharing Capabilities: Multiple Controllers 00:08:58.359 Size (in LBAs): 262144 (1GiB) 00:08:58.359 Capacity (in LBAs): 262144 (1GiB) 00:08:58.359 Utilization (in LBAs): 262144 (1GiB) 00:08:58.359 Thin Provisioning: Not Supported 00:08:58.359 Per-NS Atomic Units: No 00:08:58.359 Maximum Single Source Range Length: 128 00:08:58.359 Maximum Copy Length: 128 00:08:58.359 Maximum Source Range Count: 128 00:08:58.359 NGUID/EUI64 Never Reused: No 00:08:58.359 Namespace Write Protected: No 00:08:58.359 Endurance group ID: 1 00:08:58.359 Number of LBA Formats: 8 00:08:58.359 Current LBA Format: LBA Format #04 00:08:58.359 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.359 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.359 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.359 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.359 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.359 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.359 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.359 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.359 00:08:58.359 Get Feature FDP: 00:08:58.359 ================ 00:08:58.359 Enabled: Yes 00:08:58.359 FDP configuration index: 0 00:08:58.359 00:08:58.359 FDP configurations log page 00:08:58.359 =========================== 00:08:58.359 Number of FDP configurations: 1 00:08:58.359 Version: 0 00:08:58.359 Size: 112 00:08:58.359 FDP Configuration Descriptor: 0 00:08:58.359 Descriptor Size: 96 00:08:58.359 Reclaim Group Identifier format: 2 00:08:58.359 FDP Volatile Write Cache: Not Present 00:08:58.359 FDP Configuration: Valid 00:08:58.359 Vendor Specific Size: 0 00:08:58.359 Number of Reclaim Groups: 2 00:08:58.359 Number of Reclaim Unit Handles: 8 00:08:58.359 Max Placement Identifiers: 128 00:08:58.359 Number of Namespaces Supported: 256 00:08:58.359 Reclaim Unit Nominal Size: 6000000 bytes 00:08:58.359 Estimated Reclaim Unit Time Limit: Not Reported 00:08:58.359 RUH Desc #000: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #001: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #002: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #003: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #004: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #005: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #006: RUH Type: Initially Isolated 00:08:58.359 RUH Desc #007: RUH Type: Initially Isolated 00:08:58.359 00:08:58.359 FDP reclaim unit handle usage log page 00:08:58.359 ====================================== 00:08:58.359 Number of Reclaim Unit Handles: 8 00:08:58.359 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:58.359 RUH Usage Desc #001: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #002: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #003: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #004: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #005: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #006: RUH Attributes: Unused 00:08:58.359 RUH Usage Desc #007: RUH Attributes: Unused 00:08:58.359 00:08:58.359 FDP statistics log page 00:08:58.359 ======================= 00:08:58.359 Host bytes with metadata written: 505257984 00:08:58.359 [2024-12-11 13:06:49.858020] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 65459 terminated unexpected 00:08:58.359 Media bytes with metadata written: 505315328 00:08:58.360 Media bytes erased: 0 00:08:58.360 00:08:58.360 FDP events log page 00:08:58.360 =================== 00:08:58.360 Number of FDP events: 0 00:08:58.360 00:08:58.360 NVM Specific Namespace Data 00:08:58.360 =========================== 00:08:58.360 Logical Block Storage Tag Mask: 0 00:08:58.360 Protection Information Capabilities: 00:08:58.360 16b Guard Protection Information Storage Tag Support: No 00:08:58.360 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.360 Storage Tag Check Read Support: No 00:08:58.360 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.360 ===================================================== 00:08:58.360 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.360 ===================================================== 00:08:58.360 Controller Capabilities/Features 00:08:58.360 ================================ 00:08:58.360 Vendor ID: 1b36 00:08:58.360 Subsystem Vendor ID: 1af4 00:08:58.360 Serial Number: 12342 00:08:58.360 Model Number: QEMU NVMe Ctrl 00:08:58.360 Firmware Version: 8.0.0 00:08:58.360 Recommended Arb Burst: 6 00:08:58.360 IEEE OUI Identifier: 00 54 52 00:08:58.360 Multi-path I/O 
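The FDP statistics printed above for the 12343 subsystem give enough for a crude write-amplification estimate, the ratio of media bytes to host bytes written (both values taken from that log page):

  # Ratio of the two FDP counters shown above (media / host bytes written).
  awk 'BEGIN { printf "%.4f\n", 505315328 / 505257984 }'   # ~1.0001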
00:08:58.360 May have multiple subsystem ports: No 00:08:58.360 May have multiple controllers: No 00:08:58.360 Associated with SR-IOV VF: No 00:08:58.360 Max Data Transfer Size: 524288 00:08:58.360 Max Number of Namespaces: 256 00:08:58.360 Max Number of I/O Queues: 64 00:08:58.360 NVMe Specification Version (VS): 1.4 00:08:58.360 NVMe Specification Version (Identify): 1.4 00:08:58.360 Maximum Queue Entries: 2048 00:08:58.360 Contiguous Queues Required: Yes 00:08:58.360 Arbitration Mechanisms Supported 00:08:58.360 Weighted Round Robin: Not Supported 00:08:58.360 Vendor Specific: Not Supported 00:08:58.360 Reset Timeout: 7500 ms 00:08:58.360 Doorbell Stride: 4 bytes 00:08:58.360 NVM Subsystem Reset: Not Supported 00:08:58.360 Command Sets Supported 00:08:58.360 NVM Command Set: Supported 00:08:58.360 Boot Partition: Not Supported 00:08:58.360 Memory Page Size Minimum: 4096 bytes 00:08:58.360 Memory Page Size Maximum: 65536 bytes 00:08:58.360 Persistent Memory Region: Not Supported 00:08:58.360 Optional Asynchronous Events Supported 00:08:58.360 Namespace Attribute Notices: Supported 00:08:58.360 Firmware Activation Notices: Not Supported 00:08:58.360 ANA Change Notices: Not Supported 00:08:58.360 PLE Aggregate Log Change Notices: Not Supported 00:08:58.360 LBA Status Info Alert Notices: Not Supported 00:08:58.360 EGE Aggregate Log Change Notices: Not Supported 00:08:58.360 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.360 Zone Descriptor Change Notices: Not Supported 00:08:58.360 Discovery Log Change Notices: Not Supported 00:08:58.360 Controller Attributes 00:08:58.360 128-bit Host Identifier: Not Supported 00:08:58.360 Non-Operational Permissive Mode: Not Supported 00:08:58.360 NVM Sets: Not Supported 00:08:58.360 Read Recovery Levels: Not Supported 00:08:58.360 Endurance Groups: Not Supported 00:08:58.360 Predictable Latency Mode: Not Supported 00:08:58.360 Traffic Based Keep Alive: Not Supported 00:08:58.360 Namespace Granularity: Not Supported 00:08:58.360 SQ Associations: Not Supported 00:08:58.360 UUID List: Not Supported 00:08:58.360 Multi-Domain Subsystem: Not Supported 00:08:58.360 Fixed Capacity Management: Not Supported 00:08:58.360 Variable Capacity Management: Not Supported 00:08:58.360 Delete Endurance Group: Not Supported 00:08:58.360 Delete NVM Set: Not Supported 00:08:58.360 Extended LBA Formats Supported: Supported 00:08:58.360 Flexible Data Placement Supported: Not Supported 00:08:58.360 00:08:58.360 Controller Memory Buffer Support 00:08:58.360 ================================ 00:08:58.360 Supported: No 00:08:58.360 00:08:58.360 Persistent Memory Region Support 00:08:58.360 ================================ 00:08:58.360 Supported: No 00:08:58.360 00:08:58.360 Admin Command Set Attributes 00:08:58.360 ============================ 00:08:58.360 Security Send/Receive: Not Supported 00:08:58.360 Format NVM: Supported 00:08:58.360 Firmware Activate/Download: Not Supported 00:08:58.360 Namespace Management: Supported 00:08:58.360 Device Self-Test: Not Supported 00:08:58.360 Directives: Supported 00:08:58.360 NVMe-MI: Not Supported 00:08:58.360 Virtualization Management: Not Supported 00:08:58.360 Doorbell Buffer Config: Supported 00:08:58.360 Get LBA Status Capability: Not Supported 00:08:58.360 Command & Feature Lockdown Capability: Not Supported 00:08:58.360 Abort Command Limit: 4 00:08:58.360 Async Event Request Limit: 4 00:08:58.360 Number of Firmware Slots: N/A 00:08:58.360 Firmware Slot 1 Read-Only: N/A 00:08:58.360 Firmware Activation Without Reset: N/A
00:08:58.360 Multiple Update Detection Support: N/A 00:08:58.360 Firmware Update Granularity: No Information Provided 00:08:58.360 Per-Namespace SMART Log: Yes 00:08:58.360 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.360 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:58.360 Command Effects Log Page: Supported 00:08:58.360 Get Log Page Extended Data: Supported 00:08:58.360 Telemetry Log Pages: Not Supported 00:08:58.360 Persistent Event Log Pages: Not Supported 00:08:58.360 Supported Log Pages Log Page: May Support 00:08:58.360 Commands Supported & Effects Log Page: Not Supported 00:08:58.360 Feature Identifiers & Effects Log Page: May Support 00:08:58.360 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.360 Data Area 4 for Telemetry Log: Not Supported 00:08:58.360 Error Log Page Entries Supported: 1 00:08:58.360 Keep Alive: Not Supported 00:08:58.360 00:08:58.360 NVM Command Set Attributes 00:08:58.360 ========================== 00:08:58.360 Submission Queue Entry Size 00:08:58.360 Max: 64 00:08:58.360 Min: 64 00:08:58.360 Completion Queue Entry Size 00:08:58.360 Max: 16 00:08:58.360 Min: 16 00:08:58.360 Number of Namespaces: 256 00:08:58.360 Compare Command: Supported 00:08:58.360 Write Uncorrectable Command: Not Supported 00:08:58.360 Dataset Management Command: Supported 00:08:58.360 Write Zeroes Command: Supported 00:08:58.360 Set Features Save Field: Supported 00:08:58.360 Reservations: Not Supported 00:08:58.360 Timestamp: Supported 00:08:58.360 Copy: Supported 00:08:58.360 Volatile Write Cache: Present 00:08:58.360 Atomic Write Unit (Normal): 1 00:08:58.360 Atomic Write Unit (PFail): 1 00:08:58.360 Atomic Compare & Write Unit: 1 00:08:58.360 Fused Compare & Write: Not Supported 00:08:58.360 Scatter-Gather List 00:08:58.360 SGL Command Set: Supported 00:08:58.360 SGL Keyed: Not Supported 00:08:58.360 SGL Bit Bucket Descriptor: Not Supported 00:08:58.360 SGL Metadata Pointer: Not Supported 00:08:58.360 Oversized SGL: Not Supported 00:08:58.360 SGL Metadata Address: Not Supported 00:08:58.360 SGL Offset: Not Supported 00:08:58.360 Transport SGL Data Block: Not Supported 00:08:58.360 Replay Protected Memory Block: Not Supported 00:08:58.360 00:08:58.360 Firmware Slot Information 00:08:58.360 ========================= 00:08:58.360 Active slot: 1 00:08:58.360 Slot 1 Firmware Revision: 1.0 00:08:58.360 00:08:58.360 00:08:58.360 Commands Supported and Effects 00:08:58.360 ============================== 00:08:58.360 Admin Commands 00:08:58.361 -------------- 00:08:58.361 Delete I/O Submission Queue (00h): Supported 00:08:58.361 Create I/O Submission Queue (01h): Supported 00:08:58.361 Get Log Page (02h): Supported 00:08:58.361 Delete I/O Completion Queue (04h): Supported 00:08:58.361 Create I/O Completion Queue (05h): Supported 00:08:58.361 Identify (06h): Supported 00:08:58.361 Abort (08h): Supported 00:08:58.361 Set Features (09h): Supported 00:08:58.361 Get Features (0Ah): Supported 00:08:58.361 Asynchronous Event Request (0Ch): Supported 00:08:58.361 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.361 Directive Send (19h): Supported 00:08:58.361 Directive Receive (1Ah): Supported 00:08:58.361 Virtualization Management (1Ch): Supported 00:08:58.361 Doorbell Buffer Config (7Ch): Supported 00:08:58.361 Format NVM (80h): Supported LBA-Change 00:08:58.361 I/O Commands 00:08:58.361 ------------ 00:08:58.361 Flush (00h): Supported LBA-Change 00:08:58.361 Write (01h): Supported LBA-Change 00:08:58.361 Read (02h): Supported 00:08:58.361 Compare (05h):
Supported 00:08:58.361 Write Zeroes (08h): Supported LBA-Change 00:08:58.361 Dataset Management (09h): Supported LBA-Change 00:08:58.361 Unknown (0Ch): Supported 00:08:58.361 Unknown (12h): Supported 00:08:58.361 Copy (19h): Supported LBA-Change 00:08:58.361 Unknown (1Dh): Supported LBA-Change 00:08:58.361 00:08:58.361 Error Log 00:08:58.361 ========= 00:08:58.361 00:08:58.361 Arbitration 00:08:58.361 =========== 00:08:58.361 Arbitration Burst: no limit 00:08:58.361 00:08:58.361 Power Management 00:08:58.361 ================ 00:08:58.361 Number of Power States: 1 00:08:58.361 Current Power State: Power State #0 00:08:58.361 Power State #0: 00:08:58.361 Max Power: 25.00 W 00:08:58.361 Non-Operational State: Operational 00:08:58.361 Entry Latency: 16 microseconds 00:08:58.361 Exit Latency: 4 microseconds 00:08:58.361 Relative Read Throughput: 0 00:08:58.361 Relative Read Latency: 0 00:08:58.361 Relative Write Throughput: 0 00:08:58.361 Relative Write Latency: 0 00:08:58.361 Idle Power: Not Reported 00:08:58.361 Active Power: Not Reported 00:08:58.361 Non-Operational Permissive Mode: Not Supported 00:08:58.361 00:08:58.361 Health Information 00:08:58.361 ================== 00:08:58.361 Critical Warnings: 00:08:58.361 Available Spare Space: OK 00:08:58.361 Temperature: OK 00:08:58.361 Device Reliability: OK 00:08:58.361 Read Only: No 00:08:58.361 Volatile Memory Backup: OK 00:08:58.361 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.361 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.361 Available Spare: 0% 00:08:58.361 Available Spare Threshold: 0% 00:08:58.361 Life Percentage Used: 0% 00:08:58.361 Data Units Read: 2339 00:08:58.361 Data Units Written: 2127 00:08:58.361 Host Read Commands: 100777 00:08:58.361 Host Write Commands: 99046 00:08:58.361 Controller Busy Time: 0 minutes 00:08:58.361 Power Cycles: 0 00:08:58.361 Power On Hours: 0 hours 00:08:58.361 Unsafe Shutdowns: 0 00:08:58.361 Unrecoverable Media Errors: 0 00:08:58.361 Lifetime Error Log Entries: 0 00:08:58.361 Warning Temperature Time: 0 minutes 00:08:58.361 Critical Temperature Time: 0 minutes 00:08:58.361 00:08:58.361 Number of Queues 00:08:58.361 ================ 00:08:58.361 Number of I/O Submission Queues: 64 00:08:58.361 Number of I/O Completion Queues: 64 00:08:58.361 00:08:58.361 ZNS Specific Controller Data 00:08:58.361 ============================ 00:08:58.361 Zone Append Size Limit: 0 00:08:58.361 00:08:58.361 00:08:58.361 Active Namespaces 00:08:58.361 ================= 00:08:58.361 Namespace ID:1 00:08:58.361 Error Recovery Timeout: Unlimited 00:08:58.361 Command Set Identifier: NVM (00h) 00:08:58.361 Deallocate: Supported 00:08:58.361 Deallocated/Unwritten Error: Supported 00:08:58.361 Deallocated Read Value: All 0x00 00:08:58.361 Deallocate in Write Zeroes: Not Supported 00:08:58.361 Deallocated Guard Field: 0xFFFF 00:08:58.361 Flush: Supported 00:08:58.361 Reservation: Not Supported 00:08:58.361 Namespace Sharing Capabilities: Private 00:08:58.361 Size (in LBAs): 1048576 (4GiB) 00:08:58.361 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.361 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.361 Thin Provisioning: Not Supported 00:08:58.361 Per-NS Atomic Units: No 00:08:58.361 Maximum Single Source Range Length: 128 00:08:58.361 Maximum Copy Length: 128 00:08:58.361 Maximum Source Range Count: 128 00:08:58.361 NGUID/EUI64 Never Reused: No 00:08:58.361 Namespace Write Protected: No 00:08:58.361 Number of LBA Formats: 8 00:08:58.361 Current LBA Format: LBA Format #04 00:08:58.361 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:58.361 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.361 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.361 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.361 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.361 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.361 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.361 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.361 00:08:58.361 NVM Specific Namespace Data 00:08:58.361 =========================== 00:08:58.361 Logical Block Storage Tag Mask: 0 00:08:58.361 Protection Information Capabilities: 00:08:58.361 16b Guard Protection Information Storage Tag Support: No 00:08:58.361 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.361 Storage Tag Check Read Support: No 00:08:58.361 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Namespace ID:2 00:08:58.361 Error Recovery Timeout: Unlimited 00:08:58.361 Command Set Identifier: NVM (00h) 00:08:58.361 Deallocate: Supported 00:08:58.361 Deallocated/Unwritten Error: Supported 00:08:58.361 Deallocated Read Value: All 0x00 00:08:58.361 Deallocate in Write Zeroes: Not Supported 00:08:58.361 Deallocated Guard Field: 0xFFFF 00:08:58.361 Flush: Supported 00:08:58.361 Reservation: Not Supported 00:08:58.361 Namespace Sharing Capabilities: Private 00:08:58.361 Size (in LBAs): 1048576 (4GiB) 00:08:58.361 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.361 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.361 Thin Provisioning: Not Supported 00:08:58.361 Per-NS Atomic Units: No 00:08:58.361 Maximum Single Source Range Length: 128 00:08:58.361 Maximum Copy Length: 128 00:08:58.361 Maximum Source Range Count: 128 00:08:58.361 NGUID/EUI64 Never Reused: No 00:08:58.361 Namespace Write Protected: No 00:08:58.361 Number of LBA Formats: 8 00:08:58.361 Current LBA Format: LBA Format #04 00:08:58.361 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.361 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.361 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.361 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.361 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.361 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.361 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.361 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.361 00:08:58.361 NVM Specific Namespace Data 00:08:58.361 =========================== 00:08:58.361 Logical Block Storage Tag Mask: 0 00:08:58.361 Protection Information Capabilities: 00:08:58.361 16b Guard Protection Information Storage Tag Support: No 00:08:58.361 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:58.361 Storage Tag Check Read Support: No 00:08:58.361 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.361 Namespace ID:3 00:08:58.361 Error Recovery Timeout: Unlimited 00:08:58.361 Command Set Identifier: NVM (00h) 00:08:58.361 Deallocate: Supported 00:08:58.361 Deallocated/Unwritten Error: Supported 00:08:58.361 Deallocated Read Value: All 0x00 00:08:58.361 Deallocate in Write Zeroes: Not Supported 00:08:58.361 Deallocated Guard Field: 0xFFFF 00:08:58.361 Flush: Supported 00:08:58.361 Reservation: Not Supported 00:08:58.361 Namespace Sharing Capabilities: Private 00:08:58.361 Size (in LBAs): 1048576 (4GiB) 00:08:58.361 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.361 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.362 Thin Provisioning: Not Supported 00:08:58.362 Per-NS Atomic Units: No 00:08:58.362 Maximum Single Source Range Length: 128 00:08:58.362 Maximum Copy Length: 128 00:08:58.362 Maximum Source Range Count: 128 00:08:58.362 NGUID/EUI64 Never Reused: No 00:08:58.362 Namespace Write Protected: No 00:08:58.362 Number of LBA Formats: 8 00:08:58.362 Current LBA Format: LBA Format #04 00:08:58.362 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.362 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.362 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.362 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.362 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.362 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.362 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.362 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.362 00:08:58.362 NVM Specific Namespace Data 00:08:58.362 =========================== 00:08:58.362 Logical Block Storage Tag Mask: 0 00:08:58.362 Protection Information Capabilities: 00:08:58.362 16b Guard Protection Information Storage Tag Support: No 00:08:58.362 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.362 Storage Tag Check Read Support: No 00:08:58.362 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.362 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.362 13:06:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:58.621 ===================================================== 00:08:58.622 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.622 ===================================================== 00:08:58.622 Controller Capabilities/Features 00:08:58.622 ================================ 00:08:58.622 Vendor ID: 1b36 00:08:58.622 Subsystem Vendor ID: 1af4 00:08:58.622 Serial Number: 12340 00:08:58.622 Model Number: QEMU NVMe Ctrl 00:08:58.622 Firmware Version: 8.0.0 00:08:58.622 Recommended Arb Burst: 6 00:08:58.622 IEEE OUI Identifier: 00 54 52 00:08:58.622 Multi-path I/O 00:08:58.622 May have multiple subsystem ports: No 00:08:58.622 May have multiple controllers: No 00:08:58.622 Associated with SR-IOV VF: No 00:08:58.622 Max Data Transfer Size: 524288 00:08:58.622 Max Number of Namespaces: 256 00:08:58.622 Max Number of I/O Queues: 64 00:08:58.622 NVMe Specification Version (VS): 1.4 00:08:58.622 NVMe Specification Version (Identify): 1.4 00:08:58.622 Maximum Queue Entries: 2048 00:08:58.622 Contiguous Queues Required: Yes 00:08:58.622 Arbitration Mechanisms Supported 00:08:58.622 Weighted Round Robin: Not Supported 00:08:58.622 Vendor Specific: Not Supported 00:08:58.622 Reset Timeout: 7500 ms 00:08:58.622 Doorbell Stride: 4 bytes 00:08:58.622 NVM Subsystem Reset: Not Supported 00:08:58.622 Command Sets Supported 00:08:58.622 NVM Command Set: Supported 00:08:58.622 Boot Partition: Not Supported 00:08:58.622 Memory Page Size Minimum: 4096 bytes 00:08:58.622 Memory Page Size Maximum: 65536 bytes 00:08:58.622 Persistent Memory Region: Not Supported 00:08:58.622 Optional Asynchronous Events Supported 00:08:58.622 Namespace Attribute Notices: Supported 00:08:58.622 Firmware Activation Notices: Not Supported 00:08:58.622 ANA Change Notices: Not Supported 00:08:58.622 PLE Aggregate Log Change Notices: Not Supported 00:08:58.622 LBA Status Info Alert Notices: Not Supported 00:08:58.622 EGE Aggregate Log Change Notices: Not Supported 00:08:58.622 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.622 Zone Descriptor Change Notices: Not Supported 00:08:58.622 Discovery Log Change Notices: Not Supported 00:08:58.622 Controller Attributes 00:08:58.622 128-bit Host Identifier: Not Supported 00:08:58.622 Non-Operational Permissive Mode: Not Supported 00:08:58.622 NVM Sets: Not Supported 00:08:58.622 Read Recovery Levels: Not Supported 00:08:58.622 Endurance Groups: Not Supported 00:08:58.622 Predictable Latency Mode: Not Supported 00:08:58.622 Traffic Based Keep Alive: Not Supported 00:08:58.622 Namespace Granularity: Not Supported 00:08:58.622 SQ Associations: Not Supported 00:08:58.622 UUID List: Not Supported 00:08:58.622 Multi-Domain Subsystem: Not Supported 00:08:58.622 Fixed Capacity Management: Not Supported 00:08:58.622 Variable Capacity Management: Not Supported 00:08:58.622 Delete Endurance Group: Not Supported 00:08:58.622 Delete NVM Set: Not Supported 00:08:58.622 Extended LBA Formats Supported: Supported 00:08:58.622 Flexible Data Placement Supported: Not Supported 00:08:58.622 00:08:58.622 Controller Memory Buffer Support 00:08:58.622 ================================ 00:08:58.622 Supported: No 00:08:58.622 00:08:58.622 Persistent Memory Region Support 00:08:58.622 
================================ 00:08:58.622 Supported: No 00:08:58.622 00:08:58.622 Admin Command Set Attributes 00:08:58.622 ============================ 00:08:58.622 Security Send/Receive: Not Supported 00:08:58.622 Format NVM: Supported 00:08:58.622 Firmware Activate/Download: Not Supported 00:08:58.622 Namespace Management: Supported 00:08:58.622 Device Self-Test: Not Supported 00:08:58.622 Directives: Supported 00:08:58.622 NVMe-MI: Not Supported 00:08:58.622 Virtualization Management: Not Supported 00:08:58.622 Doorbell Buffer Config: Supported 00:08:58.622 Get LBA Status Capability: Not Supported 00:08:58.622 Command & Feature Lockdown Capability: Not Supported 00:08:58.622 Abort Command Limit: 4 00:08:58.622 Async Event Request Limit: 4 00:08:58.622 Number of Firmware Slots: N/A 00:08:58.622 Firmware Slot 1 Read-Only: N/A 00:08:58.622 Firmware Activation Without Reset: N/A 00:08:58.622 Multiple Update Detection Support: N/A 00:08:58.622 Firmware Update Granularity: No Information Provided 00:08:58.622 Per-Namespace SMART Log: Yes 00:08:58.622 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.622 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:58.622 Command Effects Log Page: Supported 00:08:58.622 Get Log Page Extended Data: Supported 00:08:58.622 Telemetry Log Pages: Not Supported 00:08:58.622 Persistent Event Log Pages: Not Supported 00:08:58.622 Supported Log Pages Log Page: May Support 00:08:58.622 Commands Supported & Effects Log Page: Not Supported 00:08:58.622 Feature Identifiers & Effects Log Page: May Support 00:08:58.622 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.622 Data Area 4 for Telemetry Log: Not Supported 00:08:58.622 Error Log Page Entries Supported: 1 00:08:58.622 Keep Alive: Not Supported 00:08:58.622 00:08:58.622 NVM Command Set Attributes 00:08:58.622 ========================== 00:08:58.622 Submission Queue Entry Size 00:08:58.622 Max: 64 00:08:58.622 Min: 64 00:08:58.622 Completion Queue Entry Size 00:08:58.622 Max: 16 00:08:58.622 Min: 16 00:08:58.622 Number of Namespaces: 256 00:08:58.622 Compare Command: Supported 00:08:58.622 Write Uncorrectable Command: Not Supported 00:08:58.622 Dataset Management Command: Supported 00:08:58.622 Write Zeroes Command: Supported 00:08:58.622 Set Features Save Field: Supported 00:08:58.622 Reservations: Not Supported 00:08:58.622 Timestamp: Supported 00:08:58.622 Copy: Supported 00:08:58.622 Volatile Write Cache: Present 00:08:58.622 Atomic Write Unit (Normal): 1 00:08:58.622 Atomic Write Unit (PFail): 1 00:08:58.622 Atomic Compare & Write Unit: 1 00:08:58.622 Fused Compare & Write: Not Supported 00:08:58.622 Scatter-Gather List 00:08:58.622 SGL Command Set: Supported 00:08:58.622 SGL Keyed: Not Supported 00:08:58.622 SGL Bit Bucket Descriptor: Not Supported 00:08:58.622 SGL Metadata Pointer: Not Supported 00:08:58.622 Oversized SGL: Not Supported 00:08:58.622 SGL Metadata Address: Not Supported 00:08:58.622 SGL Offset: Not Supported 00:08:58.622 Transport SGL Data Block: Not Supported 00:08:58.622 Replay Protected Memory Block: Not Supported 00:08:58.622 00:08:58.622 Firmware Slot Information 00:08:58.622 ========================= 00:08:58.622 Active slot: 1 00:08:58.622 Slot 1 Firmware Revision: 1.0 00:08:58.622 00:08:58.622 00:08:58.622 Commands Supported and Effects 00:08:58.622 ============================== 00:08:58.622 Admin Commands 00:08:58.622 -------------- 00:08:58.622 Delete I/O Submission Queue (00h): Supported 00:08:58.622 Create I/O Submission Queue (01h): Supported 00:08:58.622 
Get Log Page (02h): Supported 00:08:58.622 Delete I/O Completion Queue (04h): Supported 00:08:58.622 Create I/O Completion Queue (05h): Supported 00:08:58.622 Identify (06h): Supported 00:08:58.622 Abort (08h): Supported 00:08:58.622 Set Features (09h): Supported 00:08:58.622 Get Features (0Ah): Supported 00:08:58.622 Asynchronous Event Request (0Ch): Supported 00:08:58.622 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.622 Directive Send (19h): Supported 00:08:58.622 Directive Receive (1Ah): Supported 00:08:58.622 Virtualization Management (1Ch): Supported 00:08:58.622 Doorbell Buffer Config (7Ch): Supported 00:08:58.622 Format NVM (80h): Supported LBA-Change 00:08:58.622 I/O Commands 00:08:58.622 ------------ 00:08:58.622 Flush (00h): Supported LBA-Change 00:08:58.622 Write (01h): Supported LBA-Change 00:08:58.622 Read (02h): Supported 00:08:58.622 Compare (05h): Supported 00:08:58.622 Write Zeroes (08h): Supported LBA-Change 00:08:58.622 Dataset Management (09h): Supported LBA-Change 00:08:58.622 Unknown (0Ch): Supported 00:08:58.622 Unknown (12h): Supported 00:08:58.622 Copy (19h): Supported LBA-Change 00:08:58.622 Unknown (1Dh): Supported LBA-Change 00:08:58.622 00:08:58.622 Error Log 00:08:58.622 ========= 00:08:58.622 00:08:58.622 Arbitration 00:08:58.622 =========== 00:08:58.622 Arbitration Burst: no limit 00:08:58.622 00:08:58.623 Power Management 00:08:58.623 ================ 00:08:58.623 Number of Power States: 1 00:08:58.623 Current Power State: Power State #0 00:08:58.623 Power State #0: 00:08:58.623 Max Power: 25.00 W 00:08:58.623 Non-Operational State: Operational 00:08:58.623 Entry Latency: 16 microseconds 00:08:58.623 Exit Latency: 4 microseconds 00:08:58.623 Relative Read Throughput: 0 00:08:58.623 Relative Read Latency: 0 00:08:58.623 Relative Write Throughput: 0 00:08:58.623 Relative Write Latency: 0 00:08:58.882 Idle Power: Not Reported 00:08:58.882 Active Power: Not Reported 00:08:58.882 Non-Operational Permissive Mode: Not Supported 00:08:58.882 00:08:58.882 Health Information 00:08:58.882 ================== 00:08:58.882 Critical Warnings: 00:08:58.882 Available Spare Space: OK 00:08:58.882 Temperature: OK 00:08:58.882 Device Reliability: OK 00:08:58.882 Read Only: No 00:08:58.882 Volatile Memory Backup: OK 00:08:58.882 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.882 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.882 Available Spare: 0% 00:08:58.882 Available Spare Threshold: 0% 00:08:58.882 Life Percentage Used: 0% 00:08:58.882 Data Units Read: 737 00:08:58.882 Data Units Written: 665 00:08:58.882 Host Read Commands: 33018 00:08:58.882 Host Write Commands: 32804 00:08:58.882 Controller Busy Time: 0 minutes 00:08:58.882 Power Cycles: 0 00:08:58.882 Power On Hours: 0 hours 00:08:58.882 Unsafe Shutdowns: 0 00:08:58.882 Unrecoverable Media Errors: 0 00:08:58.882 Lifetime Error Log Entries: 0 00:08:58.882 Warning Temperature Time: 0 minutes 00:08:58.882 Critical Temperature Time: 0 minutes 00:08:58.882 00:08:58.882 Number of Queues 00:08:58.882 ================ 00:08:58.882 Number of I/O Submission Queues: 64 00:08:58.882 Number of I/O Completion Queues: 64 00:08:58.882 00:08:58.882 ZNS Specific Controller Data 00:08:58.882 ============================ 00:08:58.882 Zone Append Size Limit: 0 00:08:58.882 00:08:58.882 00:08:58.882 Active Namespaces 00:08:58.882 ================= 00:08:58.882 Namespace ID:1 00:08:58.882 Error Recovery Timeout: Unlimited 00:08:58.882 Command Set Identifier: NVM (00h) 00:08:58.882 Deallocate: Supported 
00:08:58.882 Deallocated/Unwritten Error: Supported 00:08:58.882 Deallocated Read Value: All 0x00 00:08:58.882 Deallocate in Write Zeroes: Not Supported 00:08:58.882 Deallocated Guard Field: 0xFFFF 00:08:58.882 Flush: Supported 00:08:58.882 Reservation: Not Supported 00:08:58.882 Metadata Transferred as: Separate Metadata Buffer 00:08:58.882 Namespace Sharing Capabilities: Private 00:08:58.882 Size (in LBAs): 1548666 (5GiB) 00:08:58.882 Capacity (in LBAs): 1548666 (5GiB) 00:08:58.882 Utilization (in LBAs): 1548666 (5GiB) 00:08:58.882 Thin Provisioning: Not Supported 00:08:58.882 Per-NS Atomic Units: No 00:08:58.882 Maximum Single Source Range Length: 128 00:08:58.882 Maximum Copy Length: 128 00:08:58.882 Maximum Source Range Count: 128 00:08:58.882 NGUID/EUI64 Never Reused: No 00:08:58.882 Namespace Write Protected: No 00:08:58.882 Number of LBA Formats: 8 00:08:58.882 Current LBA Format: LBA Format #07 00:08:58.882 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.882 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.882 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.882 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.882 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.882 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.882 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.882 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.882 00:08:58.882 NVM Specific Namespace Data 00:08:58.882 =========================== 00:08:58.882 Logical Block Storage Tag Mask: 0 00:08:58.882 Protection Information Capabilities: 00:08:58.882 16b Guard Protection Information Storage Tag Support: No 00:08:58.882 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:58.882 Storage Tag Check Read Support: No 00:08:58.882 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:58.882 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.882 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:59.142 ===================================================== 00:08:59.142 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.142 ===================================================== 00:08:59.142 Controller Capabilities/Features 00:08:59.142 ================================ 00:08:59.142 Vendor ID: 1b36 00:08:59.142 Subsystem Vendor ID: 1af4 00:08:59.142 Serial Number: 12341 00:08:59.142 Model Number: QEMU NVMe Ctrl 00:08:59.142 Firmware Version: 8.0.0 00:08:59.142 Recommended Arb Burst: 6 00:08:59.142 IEEE OUI Identifier: 00 54 52 00:08:59.142 Multi-path I/O 00:08:59.142 May have multiple subsystem ports: No 00:08:59.142 May have multiple 
controllers: No 00:08:59.142 Associated with SR-IOV VF: No 00:08:59.142 Max Data Transfer Size: 524288 00:08:59.142 Max Number of Namespaces: 256 00:08:59.142 Max Number of I/O Queues: 64 00:08:59.142 NVMe Specification Version (VS): 1.4 00:08:59.142 NVMe Specification Version (Identify): 1.4 00:08:59.142 Maximum Queue Entries: 2048 00:08:59.142 Contiguous Queues Required: Yes 00:08:59.142 Arbitration Mechanisms Supported 00:08:59.142 Weighted Round Robin: Not Supported 00:08:59.142 Vendor Specific: Not Supported 00:08:59.142 Reset Timeout: 7500 ms 00:08:59.142 Doorbell Stride: 4 bytes 00:08:59.142 NVM Subsystem Reset: Not Supported 00:08:59.142 Command Sets Supported 00:08:59.142 NVM Command Set: Supported 00:08:59.142 Boot Partition: Not Supported 00:08:59.142 Memory Page Size Minimum: 4096 bytes 00:08:59.142 Memory Page Size Maximum: 65536 bytes 00:08:59.142 Persistent Memory Region: Not Supported 00:08:59.142 Optional Asynchronous Events Supported 00:08:59.142 Namespace Attribute Notices: Supported 00:08:59.142 Firmware Activation Notices: Not Supported 00:08:59.142 ANA Change Notices: Not Supported 00:08:59.142 PLE Aggregate Log Change Notices: Not Supported 00:08:59.142 LBA Status Info Alert Notices: Not Supported 00:08:59.142 EGE Aggregate Log Change Notices: Not Supported 00:08:59.142 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.142 Zone Descriptor Change Notices: Not Supported 00:08:59.142 Discovery Log Change Notices: Not Supported 00:08:59.142 Controller Attributes 00:08:59.142 128-bit Host Identifier: Not Supported 00:08:59.142 Non-Operational Permissive Mode: Not Supported 00:08:59.142 NVM Sets: Not Supported 00:08:59.142 Read Recovery Levels: Not Supported 00:08:59.142 Endurance Groups: Not Supported 00:08:59.142 Predictable Latency Mode: Not Supported 00:08:59.142 Traffic Based Keep Alive: Not Supported 00:08:59.142 Namespace Granularity: Not Supported 00:08:59.142 SQ Associations: Not Supported 00:08:59.142 UUID List: Not Supported 00:08:59.142 Multi-Domain Subsystem: Not Supported 00:08:59.142 Fixed Capacity Management: Not Supported 00:08:59.142 Variable Capacity Management: Not Supported 00:08:59.142 Delete Endurance Group: Not Supported 00:08:59.142 Delete NVM Set: Not Supported 00:08:59.142 Extended LBA Formats Supported: Supported 00:08:59.142 Flexible Data Placement Supported: Not Supported 00:08:59.142 00:08:59.142 Controller Memory Buffer Support 00:08:59.142 ================================ 00:08:59.142 Supported: No 00:08:59.142 00:08:59.142 Persistent Memory Region Support 00:08:59.142 ================================ 00:08:59.142 Supported: No 00:08:59.142 00:08:59.142 Admin Command Set Attributes 00:08:59.142 ============================ 00:08:59.142 Security Send/Receive: Not Supported 00:08:59.142 Format NVM: Supported 00:08:59.142 Firmware Activate/Download: Not Supported 00:08:59.142 Namespace Management: Supported 00:08:59.142 Device Self-Test: Not Supported 00:08:59.143 Directives: Supported 00:08:59.143 NVMe-MI: Not Supported 00:08:59.143 Virtualization Management: Not Supported 00:08:59.143 Doorbell Buffer Config: Supported 00:08:59.143 Get LBA Status Capability: Not Supported 00:08:59.143 Command & Feature Lockdown Capability: Not Supported 00:08:59.143 Abort Command Limit: 4 00:08:59.143 Async Event Request Limit: 4 00:08:59.143 Number of Firmware Slots: N/A 00:08:59.143 Firmware Slot 1 Read-Only: N/A 00:08:59.143 Firmware Activation Without Reset: N/A 00:08:59.143 Multiple Update Detection Support: N/A 00:08:59.143 Firmware Update 
Granularity: No Information Provided 00:08:59.143 Per-Namespace SMART Log: Yes 00:08:59.143 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.143 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:59.143 Command Effects Log Page: Supported 00:08:59.143 Get Log Page Extended Data: Supported 00:08:59.143 Telemetry Log Pages: Not Supported 00:08:59.143 Persistent Event Log Pages: Not Supported 00:08:59.143 Supported Log Pages Log Page: May Support 00:08:59.143 Commands Supported & Effects Log Page: Not Supported 00:08:59.143 Feature Identifiers & Effects Log Page: May Support 00:08:59.143 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.143 Data Area 4 for Telemetry Log: Not Supported 00:08:59.143 Error Log Page Entries Supported: 1 00:08:59.143 Keep Alive: Not Supported 00:08:59.143 00:08:59.143 NVM Command Set Attributes 00:08:59.143 ========================== 00:08:59.143 Submission Queue Entry Size 00:08:59.143 Max: 64 00:08:59.143 Min: 64 00:08:59.143 Completion Queue Entry Size 00:08:59.143 Max: 16 00:08:59.143 Min: 16 00:08:59.143 Number of Namespaces: 256 00:08:59.143 Compare Command: Supported 00:08:59.143 Write Uncorrectable Command: Not Supported 00:08:59.143 Dataset Management Command: Supported 00:08:59.143 Write Zeroes Command: Supported 00:08:59.143 Set Features Save Field: Supported 00:08:59.143 Reservations: Not Supported 00:08:59.143 Timestamp: Supported 00:08:59.143 Copy: Supported 00:08:59.143 Volatile Write Cache: Present 00:08:59.143 Atomic Write Unit (Normal): 1 00:08:59.143 Atomic Write Unit (PFail): 1 00:08:59.143 Atomic Compare & Write Unit: 1 00:08:59.143 Fused Compare & Write: Not Supported 00:08:59.143 Scatter-Gather List 00:08:59.143 SGL Command Set: Supported 00:08:59.143 SGL Keyed: Not Supported 00:08:59.143 SGL Bit Bucket Descriptor: Not Supported 00:08:59.143 SGL Metadata Pointer: Not Supported 00:08:59.143 Oversized SGL: Not Supported 00:08:59.143 SGL Metadata Address: Not Supported 00:08:59.143 SGL Offset: Not Supported 00:08:59.143 Transport SGL Data Block: Not Supported 00:08:59.143 Replay Protected Memory Block: Not Supported 00:08:59.143 00:08:59.143 Firmware Slot Information 00:08:59.143 ========================= 00:08:59.143 Active slot: 1 00:08:59.143 Slot 1 Firmware Revision: 1.0 00:08:59.143 00:08:59.143 00:08:59.143 Commands Supported and Effects 00:08:59.143 ============================== 00:08:59.143 Admin Commands 00:08:59.143 -------------- 00:08:59.143 Delete I/O Submission Queue (00h): Supported 00:08:59.143 Create I/O Submission Queue (01h): Supported 00:08:59.143 Get Log Page (02h): Supported 00:08:59.143 Delete I/O Completion Queue (04h): Supported 00:08:59.143 Create I/O Completion Queue (05h): Supported 00:08:59.143 Identify (06h): Supported 00:08:59.143 Abort (08h): Supported 00:08:59.143 Set Features (09h): Supported 00:08:59.143 Get Features (0Ah): Supported 00:08:59.143 Asynchronous Event Request (0Ch): Supported 00:08:59.143 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.143 Directive Send (19h): Supported 00:08:59.143 Directive Receive (1Ah): Supported 00:08:59.143 Virtualization Management (1Ch): Supported 00:08:59.143 Doorbell Buffer Config (7Ch): Supported 00:08:59.143 Format NVM (80h): Supported LBA-Change 00:08:59.143 I/O Commands 00:08:59.143 ------------ 00:08:59.143 Flush (00h): Supported LBA-Change 00:08:59.143 Write (01h): Supported LBA-Change 00:08:59.143 Read (02h): Supported 00:08:59.143 Compare (05h): Supported 00:08:59.143 Write Zeroes (08h): Supported LBA-Change 00:08:59.143 
Dataset Management (09h): Supported LBA-Change 00:08:59.143 Unknown (0Ch): Supported 00:08:59.143 Unknown (12h): Supported 00:08:59.143 Copy (19h): Supported LBA-Change 00:08:59.143 Unknown (1Dh): Supported LBA-Change 00:08:59.143 00:08:59.143 Error Log 00:08:59.143 ========= 00:08:59.143 00:08:59.143 Arbitration 00:08:59.143 =========== 00:08:59.143 Arbitration Burst: no limit 00:08:59.143 00:08:59.143 Power Management 00:08:59.143 ================ 00:08:59.143 Number of Power States: 1 00:08:59.143 Current Power State: Power State #0 00:08:59.143 Power State #0: 00:08:59.143 Max Power: 25.00 W 00:08:59.143 Non-Operational State: Operational 00:08:59.143 Entry Latency: 16 microseconds 00:08:59.143 Exit Latency: 4 microseconds 00:08:59.143 Relative Read Throughput: 0 00:08:59.143 Relative Read Latency: 0 00:08:59.143 Relative Write Throughput: 0 00:08:59.143 Relative Write Latency: 0 00:08:59.143 Idle Power: Not Reported 00:08:59.143 Active Power: Not Reported 00:08:59.143 Non-Operational Permissive Mode: Not Supported 00:08:59.143 00:08:59.143 Health Information 00:08:59.143 ================== 00:08:59.143 Critical Warnings: 00:08:59.143 Available Spare Space: OK 00:08:59.143 Temperature: OK 00:08:59.143 Device Reliability: OK 00:08:59.143 Read Only: No 00:08:59.143 Volatile Memory Backup: OK 00:08:59.143 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.143 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.143 Available Spare: 0% 00:08:59.143 Available Spare Threshold: 0% 00:08:59.143 Life Percentage Used: 0% 00:08:59.143 Data Units Read: 1140 00:08:59.143 Data Units Written: 1007 00:08:59.143 Host Read Commands: 49944 00:08:59.143 Host Write Commands: 48735 00:08:59.143 Controller Busy Time: 0 minutes 00:08:59.143 Power Cycles: 0 00:08:59.143 Power On Hours: 0 hours 00:08:59.143 Unsafe Shutdowns: 0 00:08:59.143 Unrecoverable Media Errors: 0 00:08:59.143 Lifetime Error Log Entries: 0 00:08:59.143 Warning Temperature Time: 0 minutes 00:08:59.143 Critical Temperature Time: 0 minutes 00:08:59.143 00:08:59.143 Number of Queues 00:08:59.143 ================ 00:08:59.143 Number of I/O Submission Queues: 64 00:08:59.143 Number of I/O Completion Queues: 64 00:08:59.143 00:08:59.143 ZNS Specific Controller Data 00:08:59.143 ============================ 00:08:59.143 Zone Append Size Limit: 0 00:08:59.143 00:08:59.143 00:08:59.143 Active Namespaces 00:08:59.143 ================= 00:08:59.143 Namespace ID:1 00:08:59.143 Error Recovery Timeout: Unlimited 00:08:59.143 Command Set Identifier: NVM (00h) 00:08:59.143 Deallocate: Supported 00:08:59.143 Deallocated/Unwritten Error: Supported 00:08:59.143 Deallocated Read Value: All 0x00 00:08:59.143 Deallocate in Write Zeroes: Not Supported 00:08:59.143 Deallocated Guard Field: 0xFFFF 00:08:59.143 Flush: Supported 00:08:59.143 Reservation: Not Supported 00:08:59.143 Namespace Sharing Capabilities: Private 00:08:59.143 Size (in LBAs): 1310720 (5GiB) 00:08:59.143 Capacity (in LBAs): 1310720 (5GiB) 00:08:59.143 Utilization (in LBAs): 1310720 (5GiB) 00:08:59.143 Thin Provisioning: Not Supported 00:08:59.143 Per-NS Atomic Units: No 00:08:59.143 Maximum Single Source Range Length: 128 00:08:59.143 Maximum Copy Length: 128 00:08:59.143 Maximum Source Range Count: 128 00:08:59.143 NGUID/EUI64 Never Reused: No 00:08:59.143 Namespace Write Protected: No 00:08:59.143 Number of LBA Formats: 8 00:08:59.143 Current LBA Format: LBA Format #04 00:08:59.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.143 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:59.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.143 00:08:59.143 NVM Specific Namespace Data 00:08:59.143 =========================== 00:08:59.143 Logical Block Storage Tag Mask: 0 00:08:59.143 Protection Information Capabilities: 00:08:59.143 16b Guard Protection Information Storage Tag Support: No 00:08:59.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.143 Storage Tag Check Read Support: No 00:08:59.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.144 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:59.144 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:59.404 ===================================================== 00:08:59.404 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.404 ===================================================== 00:08:59.404 Controller Capabilities/Features 00:08:59.404 ================================ 00:08:59.404 Vendor ID: 1b36 00:08:59.404 Subsystem Vendor ID: 1af4 00:08:59.404 Serial Number: 12342 00:08:59.404 Model Number: QEMU NVMe Ctrl 00:08:59.404 Firmware Version: 8.0.0 00:08:59.404 Recommended Arb Burst: 6 00:08:59.404 IEEE OUI Identifier: 00 54 52 00:08:59.404 Multi-path I/O 00:08:59.404 May have multiple subsystem ports: No 00:08:59.404 May have multiple controllers: No 00:08:59.404 Associated with SR-IOV VF: No 00:08:59.404 Max Data Transfer Size: 524288 00:08:59.404 Max Number of Namespaces: 256 00:08:59.404 Max Number of I/O Queues: 64 00:08:59.404 NVMe Specification Version (VS): 1.4 00:08:59.404 NVMe Specification Version (Identify): 1.4 00:08:59.404 Maximum Queue Entries: 2048 00:08:59.404 Contiguous Queues Required: Yes 00:08:59.404 Arbitration Mechanisms Supported 00:08:59.404 Weighted Round Robin: Not Supported 00:08:59.404 Vendor Specific: Not Supported 00:08:59.404 Reset Timeout: 7500 ms 00:08:59.404 Doorbell Stride: 4 bytes 00:08:59.404 NVM Subsystem Reset: Not Supported 00:08:59.404 Command Sets Supported 00:08:59.404 NVM Command Set: Supported 00:08:59.404 Boot Partition: Not Supported 00:08:59.404 Memory Page Size Minimum: 4096 bytes 00:08:59.404 Memory Page Size Maximum: 65536 bytes 00:08:59.404 Persistent Memory Region: Not Supported 00:08:59.404 Optional Asynchronous Events Supported 00:08:59.404 Namespace Attribute Notices: Supported 00:08:59.404 
Firmware Activation Notices: Not Supported 00:08:59.404 ANA Change Notices: Not Supported 00:08:59.404 PLE Aggregate Log Change Notices: Not Supported 00:08:59.404 LBA Status Info Alert Notices: Not Supported 00:08:59.404 EGE Aggregate Log Change Notices: Not Supported 00:08:59.404 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.404 Zone Descriptor Change Notices: Not Supported 00:08:59.404 Discovery Log Change Notices: Not Supported 00:08:59.404 Controller Attributes 00:08:59.404 128-bit Host Identifier: Not Supported 00:08:59.404 Non-Operational Permissive Mode: Not Supported 00:08:59.404 NVM Sets: Not Supported 00:08:59.404 Read Recovery Levels: Not Supported 00:08:59.404 Endurance Groups: Not Supported 00:08:59.404 Predictable Latency Mode: Not Supported 00:08:59.404 Traffic Based Keep Alive: Not Supported 00:08:59.404 Namespace Granularity: Not Supported 00:08:59.404 SQ Associations: Not Supported 00:08:59.404 UUID List: Not Supported 00:08:59.404 Multi-Domain Subsystem: Not Supported 00:08:59.404 Fixed Capacity Management: Not Supported 00:08:59.404 Variable Capacity Management: Not Supported 00:08:59.404 Delete Endurance Group: Not Supported 00:08:59.404 Delete NVM Set: Not Supported 00:08:59.404 Extended LBA Formats Supported: Supported 00:08:59.404 Flexible Data Placement Supported: Not Supported 00:08:59.404 00:08:59.404 Controller Memory Buffer Support 00:08:59.404 ================================ 00:08:59.404 Supported: No 00:08:59.404 00:08:59.404 Persistent Memory Region Support 00:08:59.404 ================================ 00:08:59.404 Supported: No 00:08:59.404 00:08:59.404 Admin Command Set Attributes 00:08:59.404 ============================ 00:08:59.404 Security Send/Receive: Not Supported 00:08:59.404 Format NVM: Supported 00:08:59.404 Firmware Activate/Download: Not Supported 00:08:59.404 Namespace Management: Supported 00:08:59.404 Device Self-Test: Not Supported 00:08:59.404 Directives: Supported 00:08:59.404 NVMe-MI: Not Supported 00:08:59.404 Virtualization Management: Not Supported 00:08:59.404 Doorbell Buffer Config: Supported 00:08:59.404 Get LBA Status Capability: Not Supported 00:08:59.404 Command & Feature Lockdown Capability: Not Supported 00:08:59.404 Abort Command Limit: 4 00:08:59.404 Async Event Request Limit: 4 00:08:59.404 Number of Firmware Slots: N/A 00:08:59.404 Firmware Slot 1 Read-Only: N/A 00:08:59.404 Firmware Activation Without Reset: N/A 00:08:59.405 Multiple Update Detection Support: N/A 00:08:59.405 Firmware Update Granularity: No Information Provided 00:08:59.405 Per-Namespace SMART Log: Yes 00:08:59.405 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.405 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:59.405 Command Effects Log Page: Supported 00:08:59.405 Get Log Page Extended Data: Supported 00:08:59.405 Telemetry Log Pages: Not Supported 00:08:59.405 Persistent Event Log Pages: Not Supported 00:08:59.405 Supported Log Pages Log Page: May Support 00:08:59.405 Commands Supported & Effects Log Page: Not Supported 00:08:59.405 Feature Identifiers & Effects Log Page: May Support 00:08:59.405 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.405 Data Area 4 for Telemetry Log: Not Supported 00:08:59.405 Error Log Page Entries Supported: 1 00:08:59.405 Keep Alive: Not Supported 00:08:59.405 00:08:59.405 NVM Command Set Attributes 00:08:59.405 ========================== 00:08:59.405 Submission Queue Entry Size 00:08:59.405 Max: 64 00:08:59.405 Min: 64 00:08:59.405 Completion Queue Entry Size 00:08:59.405 Max: 16 
00:08:59.405 Min: 16 00:08:59.405 Number of Namespaces: 256 00:08:59.405 Compare Command: Supported 00:08:59.405 Write Uncorrectable Command: Not Supported 00:08:59.405 Dataset Management Command: Supported 00:08:59.405 Write Zeroes Command: Supported 00:08:59.405 Set Features Save Field: Supported 00:08:59.405 Reservations: Not Supported 00:08:59.405 Timestamp: Supported 00:08:59.405 Copy: Supported 00:08:59.405 Volatile Write Cache: Present 00:08:59.405 Atomic Write Unit (Normal): 1 00:08:59.405 Atomic Write Unit (PFail): 1 00:08:59.405 Atomic Compare & Write Unit: 1 00:08:59.405 Fused Compare & Write: Not Supported 00:08:59.405 Scatter-Gather List 00:08:59.405 SGL Command Set: Supported 00:08:59.405 SGL Keyed: Not Supported 00:08:59.405 SGL Bit Bucket Descriptor: Not Supported 00:08:59.405 SGL Metadata Pointer: Not Supported 00:08:59.405 Oversized SGL: Not Supported 00:08:59.405 SGL Metadata Address: Not Supported 00:08:59.405 SGL Offset: Not Supported 00:08:59.405 Transport SGL Data Block: Not Supported 00:08:59.405 Replay Protected Memory Block: Not Supported 00:08:59.405 00:08:59.405 Firmware Slot Information 00:08:59.405 ========================= 00:08:59.405 Active slot: 1 00:08:59.405 Slot 1 Firmware Revision: 1.0 00:08:59.405 00:08:59.405 00:08:59.405 Commands Supported and Effects 00:08:59.405 ============================== 00:08:59.405 Admin Commands 00:08:59.405 -------------- 00:08:59.405 Delete I/O Submission Queue (00h): Supported 00:08:59.405 Create I/O Submission Queue (01h): Supported 00:08:59.405 Get Log Page (02h): Supported 00:08:59.405 Delete I/O Completion Queue (04h): Supported 00:08:59.405 Create I/O Completion Queue (05h): Supported 00:08:59.405 Identify (06h): Supported 00:08:59.405 Abort (08h): Supported 00:08:59.405 Set Features (09h): Supported 00:08:59.405 Get Features (0Ah): Supported 00:08:59.405 Asynchronous Event Request (0Ch): Supported 00:08:59.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.405 Directive Send (19h): Supported 00:08:59.405 Directive Receive (1Ah): Supported 00:08:59.405 Virtualization Management (1Ch): Supported 00:08:59.405 Doorbell Buffer Config (7Ch): Supported 00:08:59.405 Format NVM (80h): Supported LBA-Change 00:08:59.405 I/O Commands 00:08:59.405 ------------ 00:08:59.405 Flush (00h): Supported LBA-Change 00:08:59.405 Write (01h): Supported LBA-Change 00:08:59.405 Read (02h): Supported 00:08:59.405 Compare (05h): Supported 00:08:59.405 Write Zeroes (08h): Supported LBA-Change 00:08:59.405 Dataset Management (09h): Supported LBA-Change 00:08:59.405 Unknown (0Ch): Supported 00:08:59.405 Unknown (12h): Supported 00:08:59.405 Copy (19h): Supported LBA-Change 00:08:59.405 Unknown (1Dh): Supported LBA-Change 00:08:59.405 00:08:59.405 Error Log 00:08:59.405 ========= 00:08:59.405 00:08:59.405 Arbitration 00:08:59.405 =========== 00:08:59.405 Arbitration Burst: no limit 00:08:59.405 00:08:59.405 Power Management 00:08:59.405 ================ 00:08:59.405 Number of Power States: 1 00:08:59.405 Current Power State: Power State #0 00:08:59.405 Power State #0: 00:08:59.405 Max Power: 25.00 W 00:08:59.405 Non-Operational State: Operational 00:08:59.405 Entry Latency: 16 microseconds 00:08:59.405 Exit Latency: 4 microseconds 00:08:59.405 Relative Read Throughput: 0 00:08:59.405 Relative Read Latency: 0 00:08:59.405 Relative Write Throughput: 0 00:08:59.405 Relative Write Latency: 0 00:08:59.405 Idle Power: Not Reported 00:08:59.405 Active Power: Not Reported 00:08:59.405 Non-Operational Permissive Mode: Not Supported 
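[Note] The namespace listings in these dumps report Size, Capacity, and Utilization in LBAs, with the parenthesized GiB figure following from the current LBA format's data size. A minimal sketch of that conversion in shell arithmetic, assuming the 4096-byte data size of LBA Format #04 that these controllers report as current (the variable names and the sample value are illustrative, not taken from the test run):
  lbas=1048576    # Size (in LBAs) reported for a 4GiB namespace
  lba_size=4096   # Data Size of the current LBA format (#04)
  echo "$(( lbas * lba_size )) bytes"          # 4294967296
  echo "$(( lbas * lba_size / 1024**3 )) GiB"  # 4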
00:08:59.405 00:08:59.405 Health Information 00:08:59.405 ================== 00:08:59.405 Critical Warnings: 00:08:59.405 Available Spare Space: OK 00:08:59.405 Temperature: OK 00:08:59.405 Device Reliability: OK 00:08:59.405 Read Only: No 00:08:59.405 Volatile Memory Backup: OK 00:08:59.405 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.405 Available Spare: 0% 00:08:59.405 Available Spare Threshold: 0% 00:08:59.405 Life Percentage Used: 0% 00:08:59.405 Data Units Read: 2339 00:08:59.405 Data Units Written: 2127 00:08:59.405 Host Read Commands: 100777 00:08:59.405 Host Write Commands: 99046 00:08:59.405 Controller Busy Time: 0 minutes 00:08:59.405 Power Cycles: 0 00:08:59.405 Power On Hours: 0 hours 00:08:59.405 Unsafe Shutdowns: 0 00:08:59.405 Unrecoverable Media Errors: 0 00:08:59.405 Lifetime Error Log Entries: 0 00:08:59.405 Warning Temperature Time: 0 minutes 00:08:59.405 Critical Temperature Time: 0 minutes 00:08:59.405 00:08:59.405 Number of Queues 00:08:59.405 ================ 00:08:59.405 Number of I/O Submission Queues: 64 00:08:59.405 Number of I/O Completion Queues: 64 00:08:59.405 00:08:59.405 ZNS Specific Controller Data 00:08:59.405 ============================ 00:08:59.405 Zone Append Size Limit: 0 00:08:59.405 00:08:59.405 00:08:59.405 Active Namespaces 00:08:59.405 ================= 00:08:59.405 Namespace ID:1 00:08:59.405 Error Recovery Timeout: Unlimited 00:08:59.405 Command Set Identifier: NVM (00h) 00:08:59.405 Deallocate: Supported 00:08:59.405 Deallocated/Unwritten Error: Supported 00:08:59.405 Deallocated Read Value: All 0x00 00:08:59.405 Deallocate in Write Zeroes: Not Supported 00:08:59.405 Deallocated Guard Field: 0xFFFF 00:08:59.405 Flush: Supported 00:08:59.405 Reservation: Not Supported 00:08:59.405 Namespace Sharing Capabilities: Private 00:08:59.405 Size (in LBAs): 1048576 (4GiB) 00:08:59.405 Capacity (in LBAs): 1048576 (4GiB) 00:08:59.405 Utilization (in LBAs): 1048576 (4GiB) 00:08:59.405 Thin Provisioning: Not Supported 00:08:59.405 Per-NS Atomic Units: No 00:08:59.405 Maximum Single Source Range Length: 128 00:08:59.405 Maximum Copy Length: 128 00:08:59.405 Maximum Source Range Count: 128 00:08:59.405 NGUID/EUI64 Never Reused: No 00:08:59.405 Namespace Write Protected: No 00:08:59.405 Number of LBA Formats: 8 00:08:59.405 Current LBA Format: LBA Format #04 00:08:59.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.405 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.405 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.405 00:08:59.405 NVM Specific Namespace Data 00:08:59.405 =========================== 00:08:59.405 Logical Block Storage Tag Mask: 0 00:08:59.405 Protection Information Capabilities: 00:08:59.405 16b Guard Protection Information Storage Tag Support: No 00:08:59.405 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.405 Storage Tag Check Read Support: No 00:08:59.405 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.405 Namespace ID:2 00:08:59.405 Error Recovery Timeout: Unlimited 00:08:59.405 Command Set Identifier: NVM (00h) 00:08:59.405 Deallocate: Supported 00:08:59.405 Deallocated/Unwritten Error: Supported 00:08:59.405 Deallocated Read Value: All 0x00 00:08:59.406 Deallocate in Write Zeroes: Not Supported 00:08:59.406 Deallocated Guard Field: 0xFFFF 00:08:59.406 Flush: Supported 00:08:59.406 Reservation: Not Supported 00:08:59.406 Namespace Sharing Capabilities: Private 00:08:59.406 Size (in LBAs): 1048576 (4GiB) 00:08:59.406 Capacity (in LBAs): 1048576 (4GiB) 00:08:59.406 Utilization (in LBAs): 1048576 (4GiB) 00:08:59.406 Thin Provisioning: Not Supported 00:08:59.406 Per-NS Atomic Units: No 00:08:59.406 Maximum Single Source Range Length: 128 00:08:59.406 Maximum Copy Length: 128 00:08:59.406 Maximum Source Range Count: 128 00:08:59.406 NGUID/EUI64 Never Reused: No 00:08:59.406 Namespace Write Protected: No 00:08:59.406 Number of LBA Formats: 8 00:08:59.406 Current LBA Format: LBA Format #04 00:08:59.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.406 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.406 00:08:59.406 NVM Specific Namespace Data 00:08:59.406 =========================== 00:08:59.406 Logical Block Storage Tag Mask: 0 00:08:59.406 Protection Information Capabilities: 00:08:59.406 16b Guard Protection Information Storage Tag Support: No 00:08:59.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.406 Storage Tag Check Read Support: No 00:08:59.406 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Namespace ID:3 00:08:59.406 Error Recovery Timeout: Unlimited 00:08:59.406 Command Set Identifier: NVM (00h) 00:08:59.406 Deallocate: Supported 00:08:59.406 Deallocated/Unwritten Error: Supported 00:08:59.406 Deallocated Read 
Value: All 0x00 00:08:59.406 Deallocate in Write Zeroes: Not Supported 00:08:59.406 Deallocated Guard Field: 0xFFFF 00:08:59.406 Flush: Supported 00:08:59.406 Reservation: Not Supported 00:08:59.406 Namespace Sharing Capabilities: Private 00:08:59.406 Size (in LBAs): 1048576 (4GiB) 00:08:59.406 Capacity (in LBAs): 1048576 (4GiB) 00:08:59.406 Utilization (in LBAs): 1048576 (4GiB) 00:08:59.406 Thin Provisioning: Not Supported 00:08:59.406 Per-NS Atomic Units: No 00:08:59.406 Maximum Single Source Range Length: 128 00:08:59.406 Maximum Copy Length: 128 00:08:59.406 Maximum Source Range Count: 128 00:08:59.406 NGUID/EUI64 Never Reused: No 00:08:59.406 Namespace Write Protected: No 00:08:59.406 Number of LBA Formats: 8 00:08:59.406 Current LBA Format: LBA Format #04 00:08:59.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.406 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.406 00:08:59.406 NVM Specific Namespace Data 00:08:59.406 =========================== 00:08:59.406 Logical Block Storage Tag Mask: 0 00:08:59.406 Protection Information Capabilities: 00:08:59.406 16b Guard Protection Information Storage Tag Support: No 00:08:59.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.406 Storage Tag Check Read Support: No 00:08:59.406 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.406 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:59.406 13:06:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:59.666 ===================================================== 00:08:59.666 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.666 ===================================================== 00:08:59.666 Controller Capabilities/Features 00:08:59.666 ================================ 00:08:59.666 Vendor ID: 1b36 00:08:59.666 Subsystem Vendor ID: 1af4 00:08:59.666 Serial Number: 12343 00:08:59.666 Model Number: QEMU NVMe Ctrl 00:08:59.666 Firmware Version: 8.0.0 00:08:59.666 Recommended Arb Burst: 6 00:08:59.666 IEEE OUI Identifier: 00 54 52 00:08:59.666 Multi-path I/O 00:08:59.666 May have multiple subsystem ports: No 00:08:59.666 May have multiple controllers: Yes 00:08:59.666 Associated with SR-IOV VF: No 00:08:59.666 Max Data Transfer Size: 524288 00:08:59.666 Max Number of Namespaces: 
256 00:08:59.666 Max Number of I/O Queues: 64 00:08:59.666 NVMe Specification Version (VS): 1.4 00:08:59.666 NVMe Specification Version (Identify): 1.4 00:08:59.666 Maximum Queue Entries: 2048 00:08:59.666 Contiguous Queues Required: Yes 00:08:59.666 Arbitration Mechanisms Supported 00:08:59.666 Weighted Round Robin: Not Supported 00:08:59.666 Vendor Specific: Not Supported 00:08:59.666 Reset Timeout: 7500 ms 00:08:59.666 Doorbell Stride: 4 bytes 00:08:59.666 NVM Subsystem Reset: Not Supported 00:08:59.666 Command Sets Supported 00:08:59.666 NVM Command Set: Supported 00:08:59.666 Boot Partition: Not Supported 00:08:59.666 Memory Page Size Minimum: 4096 bytes 00:08:59.666 Memory Page Size Maximum: 65536 bytes 00:08:59.666 Persistent Memory Region: Not Supported 00:08:59.666 Optional Asynchronous Events Supported 00:08:59.666 Namespace Attribute Notices: Supported 00:08:59.666 Firmware Activation Notices: Not Supported 00:08:59.666 ANA Change Notices: Not Supported 00:08:59.666 PLE Aggregate Log Change Notices: Not Supported 00:08:59.666 LBA Status Info Alert Notices: Not Supported 00:08:59.666 EGE Aggregate Log Change Notices: Not Supported 00:08:59.666 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.666 Zone Descriptor Change Notices: Not Supported 00:08:59.666 Discovery Log Change Notices: Not Supported 00:08:59.666 Controller Attributes 00:08:59.666 128-bit Host Identifier: Not Supported 00:08:59.666 Non-Operational Permissive Mode: Not Supported 00:08:59.666 NVM Sets: Not Supported 00:08:59.666 Read Recovery Levels: Not Supported 00:08:59.666 Endurance Groups: Supported 00:08:59.666 Predictable Latency Mode: Not Supported 00:08:59.666 Traffic Based Keep Alive: Not Supported 00:08:59.666 Namespace Granularity: Not Supported 00:08:59.666 SQ Associations: Not Supported 00:08:59.666 UUID List: Not Supported 00:08:59.666 Multi-Domain Subsystem: Not Supported 00:08:59.666 Fixed Capacity Management: Not Supported 00:08:59.666 Variable Capacity Management: Not Supported 00:08:59.666 Delete Endurance Group: Not Supported 00:08:59.666 Delete NVM Set: Not Supported 00:08:59.666 Extended LBA Formats Supported: Supported 00:08:59.666 Flexible Data Placement Supported: Supported 00:08:59.666 00:08:59.666 Controller Memory Buffer Support 00:08:59.666 ================================ 00:08:59.666 Supported: No 00:08:59.666 00:08:59.666 Persistent Memory Region Support 00:08:59.666 ================================ 00:08:59.666 Supported: No 00:08:59.666 00:08:59.666 Admin Command Set Attributes 00:08:59.666 ============================ 00:08:59.666 Security Send/Receive: Not Supported 00:08:59.666 Format NVM: Supported 00:08:59.666 Firmware Activate/Download: Not Supported 00:08:59.666 Namespace Management: Supported 00:08:59.666 Device Self-Test: Not Supported 00:08:59.666 Directives: Supported 00:08:59.666 NVMe-MI: Not Supported 00:08:59.666 Virtualization Management: Not Supported 00:08:59.666 Doorbell Buffer Config: Supported 00:08:59.666 Get LBA Status Capability: Not Supported 00:08:59.666 Command & Feature Lockdown Capability: Not Supported 00:08:59.666 Abort Command Limit: 4 00:08:59.666 Async Event Request Limit: 4 00:08:59.666 Number of Firmware Slots: N/A 00:08:59.666 Firmware Slot 1 Read-Only: N/A 00:08:59.666 Firmware Activation Without Reset: N/A 00:08:59.666 Multiple Update Detection Support: N/A 00:08:59.666 Firmware Update Granularity: No Information Provided 00:08:59.666 Per-Namespace SMART Log: Yes 00:08:59.666 Asymmetric Namespace Access Log Page: Not Supported
00:08:59.666 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:59.666 Command Effects Log Page: Supported 00:08:59.666 Get Log Page Extended Data: Supported 00:08:59.666 Telemetry Log Pages: Not Supported 00:08:59.666 Persistent Event Log Pages: Not Supported 00:08:59.666 Supported Log Pages Log Page: May Support 00:08:59.666 Commands Supported & Effects Log Page: Not Supported 00:08:59.666 Feature Identifiers & Effects Log Page: May Support 00:08:59.666 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.666 Data Area 4 for Telemetry Log: Not Supported 00:08:59.666 Error Log Page Entries Supported: 1 00:08:59.666 Keep Alive: Not Supported 00:08:59.666 00:08:59.666 NVM Command Set Attributes 00:08:59.666 ========================== 00:08:59.666 Submission Queue Entry Size 00:08:59.666 Max: 64 00:08:59.666 Min: 64 00:08:59.666 Completion Queue Entry Size 00:08:59.666 Max: 16 00:08:59.666 Min: 16 00:08:59.666 Number of Namespaces: 256 00:08:59.666 Compare Command: Supported 00:08:59.666 Write Uncorrectable Command: Not Supported 00:08:59.666 Dataset Management Command: Supported 00:08:59.666 Write Zeroes Command: Supported 00:08:59.666 Set Features Save Field: Supported 00:08:59.666 Reservations: Not Supported 00:08:59.666 Timestamp: Supported 00:08:59.666 Copy: Supported 00:08:59.666 Volatile Write Cache: Present 00:08:59.666 Atomic Write Unit (Normal): 1 00:08:59.666 Atomic Write Unit (PFail): 1 00:08:59.666 Atomic Compare & Write Unit: 1 00:08:59.666 Fused Compare & Write: Not Supported 00:08:59.666 Scatter-Gather List 00:08:59.666 SGL Command Set: Supported 00:08:59.666 SGL Keyed: Not Supported 00:08:59.666 SGL Bit Bucket Descriptor: Not Supported 00:08:59.666 SGL Metadata Pointer: Not Supported 00:08:59.666 Oversized SGL: Not Supported 00:08:59.666 SGL Metadata Address: Not Supported 00:08:59.666 SGL Offset: Not Supported 00:08:59.666 Transport SGL Data Block: Not Supported 00:08:59.666 Replay Protected Memory Block: Not Supported 00:08:59.666 00:08:59.666 Firmware Slot Information 00:08:59.666 ========================= 00:08:59.666 Active slot: 1 00:08:59.666 Slot 1 Firmware Revision: 1.0 00:08:59.666 00:08:59.666 00:08:59.666 Commands Supported and Effects 00:08:59.666 ============================== 00:08:59.666 Admin Commands 00:08:59.666 -------------- 00:08:59.666 Delete I/O Submission Queue (00h): Supported 00:08:59.666 Create I/O Submission Queue (01h): Supported 00:08:59.666 Get Log Page (02h): Supported 00:08:59.666 Delete I/O Completion Queue (04h): Supported 00:08:59.666 Create I/O Completion Queue (05h): Supported 00:08:59.666 Identify (06h): Supported 00:08:59.666 Abort (08h): Supported 00:08:59.666 Set Features (09h): Supported 00:08:59.666 Get Features (0Ah): Supported 00:08:59.666 Asynchronous Event Request (0Ch): Supported 00:08:59.666 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.666 Directive Send (19h): Supported 00:08:59.666 Directive Receive (1Ah): Supported 00:08:59.666 Virtualization Management (1Ch): Supported 00:08:59.666 Doorbell Buffer Config (7Ch): Supported 00:08:59.667 Format NVM (80h): Supported LBA-Change 00:08:59.667 I/O Commands 00:08:59.667 ------------ 00:08:59.667 Flush (00h): Supported LBA-Change 00:08:59.667 Write (01h): Supported LBA-Change 00:08:59.667 Read (02h): Supported 00:08:59.667 Compare (05h): Supported 00:08:59.667 Write Zeroes (08h): Supported LBA-Change 00:08:59.667 Dataset Management (09h): Supported LBA-Change 00:08:59.667 Unknown (0Ch): Supported 00:08:59.667 Unknown (12h): Supported 00:08:59.667 Copy
(19h): Supported LBA-Change 00:08:59.667 Unknown (1Dh): Supported LBA-Change 00:08:59.667 00:08:59.667 Error Log 00:08:59.667 ========= 00:08:59.667 00:08:59.667 Arbitration 00:08:59.667 =========== 00:08:59.667 Arbitration Burst: no limit 00:08:59.667 00:08:59.667 Power Management 00:08:59.667 ================ 00:08:59.667 Number of Power States: 1 00:08:59.667 Current Power State: Power State #0 00:08:59.667 Power State #0: 00:08:59.667 Max Power: 25.00 W 00:08:59.667 Non-Operational State: Operational 00:08:59.667 Entry Latency: 16 microseconds 00:08:59.667 Exit Latency: 4 microseconds 00:08:59.667 Relative Read Throughput: 0 00:08:59.667 Relative Read Latency: 0 00:08:59.667 Relative Write Throughput: 0 00:08:59.667 Relative Write Latency: 0 00:08:59.667 Idle Power: Not Reported 00:08:59.667 Active Power: Not Reported 00:08:59.667 Non-Operational Permissive Mode: Not Supported 00:08:59.667 00:08:59.667 Health Information 00:08:59.667 ================== 00:08:59.667 Critical Warnings: 00:08:59.667 Available Spare Space: OK 00:08:59.667 Temperature: OK 00:08:59.667 Device Reliability: OK 00:08:59.667 Read Only: No 00:08:59.667 Volatile Memory Backup: OK 00:08:59.667 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.667 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.667 Available Spare: 0% 00:08:59.667 Available Spare Threshold: 0% 00:08:59.667 Life Percentage Used: 0% 00:08:59.667 Data Units Read: 846 00:08:59.667 Data Units Written: 775 00:08:59.667 Host Read Commands: 34109 00:08:59.667 Host Write Commands: 33533 00:08:59.667 Controller Busy Time: 0 minutes 00:08:59.667 Power Cycles: 0 00:08:59.667 Power On Hours: 0 hours 00:08:59.667 Unsafe Shutdowns: 0 00:08:59.667 Unrecoverable Media Errors: 0 00:08:59.667 Lifetime Error Log Entries: 0 00:08:59.667 Warning Temperature Time: 0 minutes 00:08:59.667 Critical Temperature Time: 0 minutes 00:08:59.667 00:08:59.667 Number of Queues 00:08:59.667 ================ 00:08:59.667 Number of I/O Submission Queues: 64 00:08:59.667 Number of I/O Completion Queues: 64 00:08:59.667 00:08:59.667 ZNS Specific Controller Data 00:08:59.667 ============================ 00:08:59.667 Zone Append Size Limit: 0 00:08:59.667 00:08:59.667 00:08:59.667 Active Namespaces 00:08:59.667 ================= 00:08:59.667 Namespace ID:1 00:08:59.667 Error Recovery Timeout: Unlimited 00:08:59.667 Command Set Identifier: NVM (00h) 00:08:59.667 Deallocate: Supported 00:08:59.667 Deallocated/Unwritten Error: Supported 00:08:59.667 Deallocated Read Value: All 0x00 00:08:59.667 Deallocate in Write Zeroes: Not Supported 00:08:59.667 Deallocated Guard Field: 0xFFFF 00:08:59.667 Flush: Supported 00:08:59.667 Reservation: Not Supported 00:08:59.667 Namespace Sharing Capabilities: Multiple Controllers 00:08:59.667 Size (in LBAs): 262144 (1GiB) 00:08:59.667 Capacity (in LBAs): 262144 (1GiB) 00:08:59.667 Utilization (in LBAs): 262144 (1GiB) 00:08:59.667 Thin Provisioning: Not Supported 00:08:59.667 Per-NS Atomic Units: No 00:08:59.667 Maximum Single Source Range Length: 128 00:08:59.667 Maximum Copy Length: 128 00:08:59.667 Maximum Source Range Count: 128 00:08:59.667 NGUID/EUI64 Never Reused: No 00:08:59.667 Namespace Write Protected: No 00:08:59.667 Endurance group ID: 1 00:08:59.667 Number of LBA Formats: 8 00:08:59.667 Current LBA Format: LBA Format #04 00:08:59.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.667 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.667 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.667 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:59.667 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.667 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.667 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.667 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.667 00:08:59.667 Get Feature FDP: 00:08:59.667 ================ 00:08:59.667 Enabled: Yes 00:08:59.667 FDP configuration index: 0 00:08:59.667 00:08:59.667 FDP configurations log page 00:08:59.667 =========================== 00:08:59.667 Number of FDP configurations: 1 00:08:59.667 Version: 0 00:08:59.667 Size: 112 00:08:59.667 FDP Configuration Descriptor: 0 00:08:59.667 Descriptor Size: 96 00:08:59.667 Reclaim Group Identifier format: 2 00:08:59.667 FDP Volatile Write Cache: Not Present 00:08:59.667 FDP Configuration: Valid 00:08:59.667 Vendor Specific Size: 0 00:08:59.667 Number of Reclaim Groups: 2 00:08:59.667 Number of Reclaim Unit Handles: 8 00:08:59.667 Max Placement Identifiers: 128 00:08:59.667 Number of Namespaces Supported: 256 00:08:59.667 Reclaim Unit Nominal Size: 6000000 bytes 00:08:59.667 Estimated Reclaim Unit Time Limit: Not Reported 00:08:59.667 RUH Desc #000: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #001: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #002: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #003: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #004: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #005: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #006: RUH Type: Initially Isolated 00:08:59.667 RUH Desc #007: RUH Type: Initially Isolated 00:08:59.667 00:08:59.667 FDP reclaim unit handle usage log page 00:08:59.926 ====================================== 00:08:59.926 Number of Reclaim Unit Handles: 8 00:08:59.926 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:59.926 RUH Usage Desc #001: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #002: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #003: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #004: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #005: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #006: RUH Attributes: Unused 00:08:59.926 RUH Usage Desc #007: RUH Attributes: Unused 00:08:59.926 00:08:59.926 FDP statistics log page 00:08:59.926 ======================= 00:08:59.926 Host bytes with metadata written: 505257984 00:08:59.926 Media bytes with metadata written: 505315328 00:08:59.926 Media bytes erased: 0 00:08:59.926 00:08:59.926 FDP events log page 00:08:59.926 =================== 00:08:59.926 Number of FDP events: 0 00:08:59.926 00:08:59.926 NVM Specific Namespace Data 00:08:59.926 =========================== 00:08:59.926 Logical Block Storage Tag Mask: 0 00:08:59.926 Protection Information Capabilities: 00:08:59.926 16b Guard Protection Information Storage Tag Support: No 00:08:59.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.926 Storage Tag Check Read Support: No 00:08:59.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.926 00:08:59.926 real 0m1.784s 00:08:59.926 user 0m0.646s 00:08:59.926 sys 0m0.909s 00:08:59.926 13:06:51 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.926 13:06:51 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 ************************************ 00:08:59.926 END TEST nvme_identify 00:08:59.926 ************************************ 00:08:59.926 13:06:51 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:59.926 13:06:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.926 13:06:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.926 13:06:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 ************************************ 00:08:59.926 START TEST nvme_perf 00:08:59.926 ************************************ 00:08:59.926 13:06:51 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:59.926 13:06:51 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:01.305 Initializing NVMe Controllers 00:09:01.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:01.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:01.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:01.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:01.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:01.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:01.305 Initialization complete. Launching workers. 
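The "Summary latency data" blocks that follow report percentile latencies read off the cumulative histograms printed underneath them ("Range in us ... Cumulative IO count"): a given percentile corresponds to the first bucket whose cumulative count reaches that share of all completed I/O. Below is a minimal Python sketch of that reconstruction, not SPDK's internal code; the bucket bounds echo the first controller's summary, while the cumulative counts and the total (approximated from the reported ~13970 IOPS over the 1-second run) are invented sample values for illustration.

total_ios = 13970                 # hypothetical: ~13970.02 IOPS x 1 s run
buckets = [                       # (bucket upper bound in us, cumulative IO count)
    (7895.904, 140),              # counts are made-up sample values,
    (8106.461, 1397),             # not figures taken from this log
    (8369.658, 3493),
    (8685.494, 6985),
    (9001.330, 10478),
    (9369.806, 12573),
    (9948.839, 13272),
    (16002.365, 13691),
    (21055.743, 13831),
    (51890.410, 13970),
]

def percentile(p):
    # Return the first bucket upper bound whose cumulative share reaches p percent.
    for upper_us, cumulative in buckets:
        if 100.0 * cumulative / total_ios >= p:
            return upper_us
    return buckets[-1][0]

for p in (1, 10, 25, 50, 75, 90, 95, 98, 99):
    print(f"{p:>9.5f}% : {percentile(p):>10.3f}us")

With these sample counts the printout lands on the same buckets the summary for PCIE (0000:00:10.0) NSID 1 reports below (7895.904us at 1%, 8685.494us at 50%, and so on).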
00:09:01.305 ======================================================== 00:09:01.305 Latency(us) 00:09:01.305 Device Information : IOPS MiB/s Average min max 00:09:01.305 PCIE (0000:00:10.0) NSID 1 from core 0: 13970.02 163.71 9180.85 7627.01 51890.41 00:09:01.305 PCIE (0000:00:11.0) NSID 1 from core 0: 13970.02 163.71 9164.51 7756.56 49449.02 00:09:01.305 PCIE (0000:00:13.0) NSID 1 from core 0: 13970.02 163.71 9145.60 7727.60 47808.20 00:09:01.305 PCIE (0000:00:12.0) NSID 1 from core 0: 13970.02 163.71 9125.83 7761.22 45524.22 00:09:01.305 PCIE (0000:00:12.0) NSID 2 from core 0: 13970.02 163.71 9107.80 7751.51 43189.23 00:09:01.305 PCIE (0000:00:12.0) NSID 3 from core 0: 14033.81 164.46 9048.29 7781.02 35723.37 00:09:01.305 ======================================================== 00:09:01.305 Total : 83883.94 983.01 9128.75 7627.01 51890.41 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7895.904us 00:09:01.305 10.00000% : 8106.461us 00:09:01.305 25.00000% : 8369.658us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 9001.330us 00:09:01.305 90.00000% : 9369.806us 00:09:01.305 95.00000% : 9948.839us 00:09:01.305 98.00000% : 16002.365us 00:09:01.305 99.00000% : 21055.743us 00:09:01.305 99.50000% : 44638.175us 00:09:01.305 99.90000% : 51586.570us 00:09:01.305 99.99000% : 52007.685us 00:09:01.305 99.99900% : 52007.685us 00:09:01.305 99.99990% : 52007.685us 00:09:01.305 99.99999% : 52007.685us 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7948.543us 00:09:01.305 10.00000% : 8211.740us 00:09:01.305 25.00000% : 8422.297us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 8948.691us 00:09:01.305 90.00000% : 9317.166us 00:09:01.305 95.00000% : 10106.757us 00:09:01.305 98.00000% : 16423.480us 00:09:01.305 99.00000% : 20424.071us 00:09:01.305 99.50000% : 42743.158us 00:09:01.305 99.90000% : 49059.881us 00:09:01.305 99.99000% : 49480.996us 00:09:01.305 99.99900% : 49480.996us 00:09:01.305 99.99990% : 49480.996us 00:09:01.305 99.99999% : 49480.996us 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7948.543us 00:09:01.305 10.00000% : 8211.740us 00:09:01.305 25.00000% : 8422.297us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 8948.691us 00:09:01.305 90.00000% : 9264.527us 00:09:01.305 95.00000% : 10001.478us 00:09:01.305 98.00000% : 15475.971us 00:09:01.305 99.00000% : 19687.120us 00:09:01.305 99.50000% : 41058.699us 00:09:01.305 99.90000% : 47375.422us 00:09:01.305 99.99000% : 47796.537us 00:09:01.305 99.99900% : 48007.094us 00:09:01.305 99.99990% : 48007.094us 00:09:01.305 99.99999% : 48007.094us 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7948.543us 00:09:01.305 10.00000% : 8211.740us 00:09:01.305 25.00000% : 8422.297us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 8948.691us 00:09:01.305 90.00000% : 9264.527us 00:09:01.305 95.00000% : 10054.117us 00:09:01.305 98.00000% : 14949.578us 00:09:01.305 99.00000% : 
19476.562us 00:09:01.305 99.50000% : 38532.010us 00:09:01.305 99.90000% : 45269.847us 00:09:01.305 99.99000% : 45690.962us 00:09:01.305 99.99900% : 45690.962us 00:09:01.305 99.99990% : 45690.962us 00:09:01.305 99.99999% : 45690.962us 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7948.543us 00:09:01.305 10.00000% : 8211.740us 00:09:01.305 25.00000% : 8422.297us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 8948.691us 00:09:01.305 90.00000% : 9264.527us 00:09:01.305 95.00000% : 9948.839us 00:09:01.305 98.00000% : 14528.463us 00:09:01.305 99.00000% : 19897.677us 00:09:01.305 99.50000% : 36426.435us 00:09:01.305 99.90000% : 42953.716us 00:09:01.305 99.99000% : 43164.273us 00:09:01.305 99.99900% : 43374.831us 00:09:01.305 99.99990% : 43374.831us 00:09:01.305 99.99999% : 43374.831us 00:09:01.305 00:09:01.305 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:01.305 ================================================================================= 00:09:01.305 1.00000% : 7948.543us 00:09:01.305 10.00000% : 8211.740us 00:09:01.305 25.00000% : 8422.297us 00:09:01.305 50.00000% : 8685.494us 00:09:01.305 75.00000% : 8948.691us 00:09:01.305 90.00000% : 9317.166us 00:09:01.305 95.00000% : 10159.396us 00:09:01.305 98.00000% : 14844.299us 00:09:01.305 99.00000% : 20529.349us 00:09:01.305 99.50000% : 28846.368us 00:09:01.305 99.90000% : 35373.648us 00:09:01.305 99.99000% : 35794.763us 00:09:01.305 99.99900% : 35794.763us 00:09:01.305 99.99990% : 35794.763us 00:09:01.305 99.99999% : 35794.763us 00:09:01.305 00:09:01.305 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:01.305 ============================================================================== 00:09:01.305 Range in us Cumulative IO count 00:09:01.305 7580.067 - 7632.707: 0.0071% ( 1) 00:09:01.305 7632.707 - 7685.346: 0.0357% ( 4) 00:09:01.305 7685.346 - 7737.986: 0.1284% ( 13) 00:09:01.305 7737.986 - 7790.625: 0.3567% ( 32) 00:09:01.305 7790.625 - 7843.264: 0.8134% ( 64) 00:09:01.305 7843.264 - 7895.904: 1.7337% ( 129) 00:09:01.305 7895.904 - 7948.543: 3.2463% ( 212) 00:09:01.305 7948.543 - 8001.182: 5.3082% ( 289) 00:09:01.305 8001.182 - 8053.822: 7.5200% ( 310) 00:09:01.305 8053.822 - 8106.461: 10.2098% ( 377) 00:09:01.305 8106.461 - 8159.100: 12.8853% ( 375) 00:09:01.305 8159.100 - 8211.740: 16.0388% ( 442) 00:09:01.305 8211.740 - 8264.379: 19.3422% ( 463) 00:09:01.305 8264.379 - 8317.018: 22.9167% ( 501) 00:09:01.305 8317.018 - 8369.658: 26.6053% ( 517) 00:09:01.305 8369.658 - 8422.297: 30.7648% ( 583) 00:09:01.305 8422.297 - 8474.937: 34.9672% ( 589) 00:09:01.305 8474.937 - 8527.576: 39.2551% ( 601) 00:09:01.305 8527.576 - 8580.215: 43.5574% ( 603) 00:09:01.305 8580.215 - 8632.855: 47.9880% ( 621) 00:09:01.305 8632.855 - 8685.494: 52.3116% ( 606) 00:09:01.305 8685.494 - 8738.133: 56.8208% ( 632) 00:09:01.305 8738.133 - 8790.773: 61.2443% ( 620) 00:09:01.305 8790.773 - 8843.412: 65.5751% ( 607) 00:09:01.305 8843.412 - 8896.051: 69.8131% ( 594) 00:09:01.305 8896.051 - 8948.691: 73.7657% ( 554) 00:09:01.305 8948.691 - 9001.330: 77.3116% ( 497) 00:09:01.305 9001.330 - 9053.969: 80.2797% ( 416) 00:09:01.305 9053.969 - 9106.609: 82.7911% ( 352) 00:09:01.305 9106.609 - 9159.248: 85.0243% ( 313) 00:09:01.305 9159.248 - 9211.888: 86.9292% ( 267) 00:09:01.305 9211.888 - 9264.527: 88.5559% ( 228) 00:09:01.305 9264.527 - 9317.166: 89.9329% ( 
193) 00:09:01.305 9317.166 - 9369.806: 91.1744% ( 174) 00:09:01.305 9369.806 - 9422.445: 92.2160% ( 146) 00:09:01.305 9422.445 - 9475.084: 92.9438% ( 102) 00:09:01.305 9475.084 - 9527.724: 93.5217% ( 81) 00:09:01.306 9527.724 - 9580.363: 93.9070% ( 54) 00:09:01.306 9580.363 - 9633.002: 94.2280% ( 45) 00:09:01.306 9633.002 - 9685.642: 94.4492% ( 31) 00:09:01.306 9685.642 - 9738.281: 94.6276% ( 25) 00:09:01.306 9738.281 - 9790.920: 94.8059% ( 25) 00:09:01.306 9790.920 - 9843.560: 94.8987% ( 13) 00:09:01.306 9843.560 - 9896.199: 94.9772% ( 11) 00:09:01.306 9896.199 - 9948.839: 95.0485% ( 10) 00:09:01.306 9948.839 - 10001.478: 95.1199% ( 10) 00:09:01.306 10001.478 - 10054.117: 95.1983% ( 11) 00:09:01.306 10054.117 - 10106.757: 95.2840% ( 12) 00:09:01.306 10106.757 - 10159.396: 95.3838% ( 14) 00:09:01.306 10159.396 - 10212.035: 95.4695% ( 12) 00:09:01.306 10212.035 - 10264.675: 95.5693% ( 14) 00:09:01.306 10264.675 - 10317.314: 95.6835% ( 16) 00:09:01.306 10317.314 - 10369.953: 95.7905% ( 15) 00:09:01.306 10369.953 - 10422.593: 95.9047% ( 16) 00:09:01.306 10422.593 - 10475.232: 96.0402% ( 19) 00:09:01.306 10475.232 - 10527.871: 96.1187% ( 11) 00:09:01.306 10527.871 - 10580.511: 96.1829% ( 9) 00:09:01.306 10580.511 - 10633.150: 96.2543% ( 10) 00:09:01.306 10633.150 - 10685.790: 96.2971% ( 6) 00:09:01.306 10685.790 - 10738.429: 96.3613% ( 9) 00:09:01.306 10738.429 - 10791.068: 96.4112% ( 7) 00:09:01.306 10791.068 - 10843.708: 96.4612% ( 7) 00:09:01.306 10843.708 - 10896.347: 96.5111% ( 7) 00:09:01.306 10896.347 - 10948.986: 96.5682% ( 8) 00:09:01.306 10948.986 - 11001.626: 96.6110% ( 6) 00:09:01.306 11001.626 - 11054.265: 96.6752% ( 9) 00:09:01.306 11054.265 - 11106.904: 96.7323% ( 8) 00:09:01.306 11106.904 - 11159.544: 96.7751% ( 6) 00:09:01.306 11159.544 - 11212.183: 96.8179% ( 6) 00:09:01.306 11212.183 - 11264.822: 96.8821% ( 9) 00:09:01.306 11264.822 - 11317.462: 96.9249% ( 6) 00:09:01.306 11317.462 - 11370.101: 96.9820% ( 8) 00:09:01.306 11370.101 - 11422.741: 97.0320% ( 7) 00:09:01.306 11422.741 - 11475.380: 97.0890% ( 8) 00:09:01.306 11475.380 - 11528.019: 97.1176% ( 4) 00:09:01.306 11528.019 - 11580.659: 97.1461% ( 4) 00:09:01.306 11580.659 - 11633.298: 97.1818% ( 5) 00:09:01.306 11633.298 - 11685.937: 97.2032% ( 3) 00:09:01.306 11685.937 - 11738.577: 97.2175% ( 2) 00:09:01.306 11738.577 - 11791.216: 97.2317% ( 2) 00:09:01.306 11791.216 - 11843.855: 97.2460% ( 2) 00:09:01.306 11843.855 - 11896.495: 97.2531% ( 1) 00:09:01.306 11896.495 - 11949.134: 97.2603% ( 1) 00:09:01.306 12791.364 - 12844.003: 97.2745% ( 2) 00:09:01.306 12844.003 - 12896.643: 97.2888% ( 2) 00:09:01.306 12896.643 - 12949.282: 97.3031% ( 2) 00:09:01.306 12949.282 - 13001.921: 97.3174% ( 2) 00:09:01.306 13001.921 - 13054.561: 97.3245% ( 1) 00:09:01.306 13054.561 - 13107.200: 97.3459% ( 3) 00:09:01.306 13107.200 - 13159.839: 97.3602% ( 2) 00:09:01.306 13159.839 - 13212.479: 97.3744% ( 2) 00:09:01.306 13212.479 - 13265.118: 97.3816% ( 1) 00:09:01.306 13265.118 - 13317.757: 97.4030% ( 3) 00:09:01.306 13317.757 - 13370.397: 97.4172% ( 2) 00:09:01.306 13370.397 - 13423.036: 97.4315% ( 2) 00:09:01.306 13423.036 - 13475.676: 97.4458% ( 2) 00:09:01.306 13475.676 - 13580.954: 97.4743% ( 4) 00:09:01.306 13580.954 - 13686.233: 97.5029% ( 4) 00:09:01.306 13686.233 - 13791.512: 97.5243% ( 3) 00:09:01.306 13791.512 - 13896.790: 97.5528% ( 4) 00:09:01.306 13896.790 - 14002.069: 97.5742% ( 3) 00:09:01.306 14002.069 - 14107.348: 97.6027% ( 4) 00:09:01.306 14107.348 - 14212.627: 97.6313% ( 4) 00:09:01.306 14212.627 - 14317.905: 97.6670% ( 
5) 00:09:01.306 14317.905 - 14423.184: 97.6884% ( 3) 00:09:01.306 14423.184 - 14528.463: 97.7026% ( 2) 00:09:01.306 14528.463 - 14633.741: 97.7169% ( 2) 00:09:01.306 15160.135 - 15265.414: 97.7312% ( 2) 00:09:01.306 15265.414 - 15370.692: 97.7740% ( 6) 00:09:01.306 15370.692 - 15475.971: 97.8168% ( 6) 00:09:01.306 15475.971 - 15581.250: 97.8596% ( 6) 00:09:01.306 15581.250 - 15686.529: 97.9024% ( 6) 00:09:01.306 15686.529 - 15791.807: 97.9381% ( 5) 00:09:01.306 15791.807 - 15897.086: 97.9880% ( 7) 00:09:01.306 15897.086 - 16002.365: 98.0237% ( 5) 00:09:01.306 16002.365 - 16107.643: 98.0594% ( 5) 00:09:01.306 16107.643 - 16212.922: 98.1022% ( 6) 00:09:01.306 16212.922 - 16318.201: 98.1592% ( 8) 00:09:01.306 16318.201 - 16423.480: 98.2235% ( 9) 00:09:01.306 16423.480 - 16528.758: 98.2449% ( 3) 00:09:01.306 16528.758 - 16634.037: 98.2734% ( 4) 00:09:01.306 16634.037 - 16739.316: 98.3091% ( 5) 00:09:01.306 16739.316 - 16844.594: 98.3447% ( 5) 00:09:01.306 16844.594 - 16949.873: 98.3733% ( 4) 00:09:01.306 16949.873 - 17055.152: 98.4018% ( 4) 00:09:01.306 17055.152 - 17160.431: 98.4375% ( 5) 00:09:01.306 17160.431 - 17265.709: 98.4732% ( 5) 00:09:01.306 17265.709 - 17370.988: 98.5017% ( 4) 00:09:01.306 17370.988 - 17476.267: 98.5374% ( 5) 00:09:01.306 17476.267 - 17581.545: 98.5659% ( 4) 00:09:01.306 17581.545 - 17686.824: 98.5945% ( 4) 00:09:01.306 17686.824 - 17792.103: 98.6301% ( 5) 00:09:01.306 19792.398 - 19897.677: 98.6444% ( 2) 00:09:01.306 19897.677 - 20002.956: 98.6729% ( 4) 00:09:01.306 20002.956 - 20108.235: 98.7158% ( 6) 00:09:01.306 20108.235 - 20213.513: 98.7514% ( 5) 00:09:01.306 20213.513 - 20318.792: 98.7800% ( 4) 00:09:01.306 20318.792 - 20424.071: 98.8085% ( 4) 00:09:01.306 20424.071 - 20529.349: 98.8513% ( 6) 00:09:01.306 20529.349 - 20634.628: 98.8870% ( 5) 00:09:01.306 20634.628 - 20739.907: 98.9155% ( 4) 00:09:01.306 20739.907 - 20845.186: 98.9512% ( 5) 00:09:01.306 20845.186 - 20950.464: 98.9869% ( 5) 00:09:01.306 20950.464 - 21055.743: 99.0297% ( 6) 00:09:01.306 21055.743 - 21161.022: 99.0582% ( 4) 00:09:01.306 21161.022 - 21266.300: 99.0868% ( 4) 00:09:01.306 42532.601 - 42743.158: 99.1010% ( 2) 00:09:01.306 42743.158 - 42953.716: 99.1510% ( 7) 00:09:01.306 42953.716 - 43164.273: 99.2009% ( 7) 00:09:01.306 43164.273 - 43374.831: 99.2437% ( 6) 00:09:01.306 43374.831 - 43585.388: 99.2937% ( 7) 00:09:01.306 43585.388 - 43795.945: 99.3436% ( 7) 00:09:01.306 43795.945 - 44006.503: 99.3936% ( 7) 00:09:01.306 44006.503 - 44217.060: 99.4364% ( 6) 00:09:01.306 44217.060 - 44427.618: 99.4792% ( 6) 00:09:01.306 44427.618 - 44638.175: 99.5291% ( 7) 00:09:01.306 44638.175 - 44848.733: 99.5434% ( 2) 00:09:01.306 49691.553 - 49902.111: 99.5791% ( 5) 00:09:01.306 49902.111 - 50112.668: 99.6290% ( 7) 00:09:01.306 50112.668 - 50323.226: 99.6718% ( 6) 00:09:01.306 50323.226 - 50533.783: 99.7217% ( 7) 00:09:01.306 50533.783 - 50744.341: 99.7574% ( 5) 00:09:01.306 50744.341 - 50954.898: 99.8002% ( 6) 00:09:01.306 50954.898 - 51165.455: 99.8430% ( 6) 00:09:01.306 51165.455 - 51376.013: 99.8858% ( 6) 00:09:01.306 51376.013 - 51586.570: 99.9429% ( 8) 00:09:01.306 51586.570 - 51797.128: 99.9857% ( 6) 00:09:01.306 51797.128 - 52007.685: 100.0000% ( 2) 00:09:01.306 00:09:01.306 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:01.306 ============================================================================== 00:09:01.306 Range in us Cumulative IO count 00:09:01.306 7737.986 - 7790.625: 0.0428% ( 6) 00:09:01.306 7790.625 - 7843.264: 0.1070% ( 9) 00:09:01.306 7843.264 - 
7895.904: 0.4138% ( 43) 00:09:01.306 7895.904 - 7948.543: 1.0845% ( 94) 00:09:01.306 7948.543 - 8001.182: 2.5756% ( 209) 00:09:01.306 8001.182 - 8053.822: 4.7731% ( 308) 00:09:01.306 8053.822 - 8106.461: 7.1846% ( 338) 00:09:01.306 8106.461 - 8159.100: 9.8388% ( 372) 00:09:01.306 8159.100 - 8211.740: 13.0922% ( 456) 00:09:01.306 8211.740 - 8264.379: 16.5882% ( 490) 00:09:01.306 8264.379 - 8317.018: 20.3981% ( 534) 00:09:01.306 8317.018 - 8369.658: 24.4649% ( 570) 00:09:01.306 8369.658 - 8422.297: 28.6958% ( 593) 00:09:01.306 8422.297 - 8474.937: 33.1835% ( 629) 00:09:01.306 8474.937 - 8527.576: 38.1564% ( 697) 00:09:01.306 8527.576 - 8580.215: 43.0508% ( 686) 00:09:01.306 8580.215 - 8632.855: 48.0808% ( 705) 00:09:01.306 8632.855 - 8685.494: 53.1821% ( 715) 00:09:01.306 8685.494 - 8738.133: 58.3262% ( 721) 00:09:01.306 8738.133 - 8790.773: 63.3776% ( 708) 00:09:01.306 8790.773 - 8843.412: 68.1721% ( 672) 00:09:01.306 8843.412 - 8896.051: 72.5956% ( 620) 00:09:01.306 8896.051 - 8948.691: 76.2129% ( 507) 00:09:01.306 8948.691 - 9001.330: 79.4092% ( 448) 00:09:01.306 9001.330 - 9053.969: 82.1704% ( 387) 00:09:01.306 9053.969 - 9106.609: 84.5534% ( 334) 00:09:01.306 9106.609 - 9159.248: 86.5939% ( 286) 00:09:01.306 9159.248 - 9211.888: 88.3419% ( 245) 00:09:01.306 9211.888 - 9264.527: 89.8616% ( 213) 00:09:01.306 9264.527 - 9317.166: 91.0745% ( 170) 00:09:01.306 9317.166 - 9369.806: 91.9735% ( 126) 00:09:01.306 9369.806 - 9422.445: 92.5942% ( 87) 00:09:01.306 9422.445 - 9475.084: 93.0365% ( 62) 00:09:01.306 9475.084 - 9527.724: 93.4004% ( 51) 00:09:01.306 9527.724 - 9580.363: 93.7215% ( 45) 00:09:01.306 9580.363 - 9633.002: 93.9212% ( 28) 00:09:01.306 9633.002 - 9685.642: 94.0568% ( 19) 00:09:01.306 9685.642 - 9738.281: 94.1995% ( 20) 00:09:01.306 9738.281 - 9790.920: 94.3279% ( 18) 00:09:01.306 9790.920 - 9843.560: 94.4563% ( 18) 00:09:01.306 9843.560 - 9896.199: 94.5990% ( 20) 00:09:01.306 9896.199 - 9948.839: 94.7275% ( 18) 00:09:01.306 9948.839 - 10001.478: 94.8701% ( 20) 00:09:01.306 10001.478 - 10054.117: 94.9986% ( 18) 00:09:01.306 10054.117 - 10106.757: 95.1270% ( 18) 00:09:01.306 10106.757 - 10159.396: 95.2697% ( 20) 00:09:01.306 10159.396 - 10212.035: 95.4409% ( 24) 00:09:01.306 10212.035 - 10264.675: 95.6050% ( 23) 00:09:01.306 10264.675 - 10317.314: 95.7549% ( 21) 00:09:01.306 10317.314 - 10369.953: 95.8833% ( 18) 00:09:01.306 10369.953 - 10422.593: 95.9832% ( 14) 00:09:01.306 10422.593 - 10475.232: 96.0616% ( 11) 00:09:01.306 10475.232 - 10527.871: 96.1330% ( 10) 00:09:01.306 10527.871 - 10580.511: 96.2043% ( 10) 00:09:01.306 10580.511 - 10633.150: 96.2828% ( 11) 00:09:01.306 10633.150 - 10685.790: 96.3684% ( 12) 00:09:01.306 10685.790 - 10738.429: 96.4184% ( 7) 00:09:01.306 10738.429 - 10791.068: 96.4755% ( 8) 00:09:01.306 10791.068 - 10843.708: 96.5254% ( 7) 00:09:01.306 10843.708 - 10896.347: 96.5896% ( 9) 00:09:01.307 10896.347 - 10948.986: 96.6396% ( 7) 00:09:01.307 10948.986 - 11001.626: 96.7038% ( 9) 00:09:01.307 11001.626 - 11054.265: 96.7680% ( 9) 00:09:01.307 11054.265 - 11106.904: 96.8108% ( 6) 00:09:01.307 11106.904 - 11159.544: 96.8536% ( 6) 00:09:01.307 11159.544 - 11212.183: 96.8821% ( 4) 00:09:01.307 11212.183 - 11264.822: 96.9178% ( 5) 00:09:01.307 11264.822 - 11317.462: 96.9463% ( 4) 00:09:01.307 11317.462 - 11370.101: 96.9820% ( 5) 00:09:01.307 11370.101 - 11422.741: 97.0177% ( 5) 00:09:01.307 11422.741 - 11475.380: 97.0462% ( 4) 00:09:01.307 11475.380 - 11528.019: 97.0819% ( 5) 00:09:01.307 11528.019 - 11580.659: 97.1104% ( 4) 00:09:01.307 11580.659 - 
11633.298: 97.1318% ( 3) 00:09:01.307 11633.298 - 11685.937: 97.1533% ( 3) 00:09:01.307 11685.937 - 11738.577: 97.1675% ( 2) 00:09:01.307 11738.577 - 11791.216: 97.1818% ( 2) 00:09:01.307 11791.216 - 11843.855: 97.1961% ( 2) 00:09:01.307 11843.855 - 11896.495: 97.2103% ( 2) 00:09:01.307 11896.495 - 11949.134: 97.2246% ( 2) 00:09:01.307 11949.134 - 12001.773: 97.2460% ( 3) 00:09:01.307 12001.773 - 12054.413: 97.2603% ( 2) 00:09:01.307 12580.806 - 12633.446: 97.2745% ( 2) 00:09:01.307 12633.446 - 12686.085: 97.2959% ( 3) 00:09:01.307 12686.085 - 12738.724: 97.3102% ( 2) 00:09:01.307 12738.724 - 12791.364: 97.3245% ( 2) 00:09:01.307 12791.364 - 12844.003: 97.3459% ( 3) 00:09:01.307 12844.003 - 12896.643: 97.3602% ( 2) 00:09:01.307 12896.643 - 12949.282: 97.3744% ( 2) 00:09:01.307 12949.282 - 13001.921: 97.3958% ( 3) 00:09:01.307 13001.921 - 13054.561: 97.4101% ( 2) 00:09:01.307 13054.561 - 13107.200: 97.4315% ( 3) 00:09:01.307 13107.200 - 13159.839: 97.4458% ( 2) 00:09:01.307 13159.839 - 13212.479: 97.4600% ( 2) 00:09:01.307 13212.479 - 13265.118: 97.4814% ( 3) 00:09:01.307 13265.118 - 13317.757: 97.4957% ( 2) 00:09:01.307 13317.757 - 13370.397: 97.5100% ( 2) 00:09:01.307 13370.397 - 13423.036: 97.5314% ( 3) 00:09:01.307 13423.036 - 13475.676: 97.5457% ( 2) 00:09:01.307 13475.676 - 13580.954: 97.5813% ( 5) 00:09:01.307 13580.954 - 13686.233: 97.6099% ( 4) 00:09:01.307 13686.233 - 13791.512: 97.6455% ( 5) 00:09:01.307 13791.512 - 13896.790: 97.6741% ( 4) 00:09:01.307 13896.790 - 14002.069: 97.7026% ( 4) 00:09:01.307 14002.069 - 14107.348: 97.7169% ( 2) 00:09:01.307 15791.807 - 15897.086: 97.7526% ( 5) 00:09:01.307 15897.086 - 16002.365: 97.7882% ( 5) 00:09:01.307 16002.365 - 16107.643: 97.8311% ( 6) 00:09:01.307 16107.643 - 16212.922: 97.8953% ( 9) 00:09:01.307 16212.922 - 16318.201: 97.9666% ( 10) 00:09:01.307 16318.201 - 16423.480: 98.0522% ( 12) 00:09:01.307 16423.480 - 16528.758: 98.1378% ( 12) 00:09:01.307 16528.758 - 16634.037: 98.2235% ( 12) 00:09:01.307 16634.037 - 16739.316: 98.3091% ( 12) 00:09:01.307 16739.316 - 16844.594: 98.3876% ( 11) 00:09:01.307 16844.594 - 16949.873: 98.4732% ( 12) 00:09:01.307 16949.873 - 17055.152: 98.5517% ( 11) 00:09:01.307 17055.152 - 17160.431: 98.6230% ( 10) 00:09:01.307 17160.431 - 17265.709: 98.6301% ( 1) 00:09:01.307 19371.284 - 19476.562: 98.6373% ( 1) 00:09:01.307 19476.562 - 19581.841: 98.6729% ( 5) 00:09:01.307 19581.841 - 19687.120: 98.7086% ( 5) 00:09:01.307 19687.120 - 19792.398: 98.7514% ( 6) 00:09:01.307 19792.398 - 19897.677: 98.7942% ( 6) 00:09:01.307 19897.677 - 20002.956: 98.8299% ( 5) 00:09:01.307 20002.956 - 20108.235: 98.8727% ( 6) 00:09:01.307 20108.235 - 20213.513: 98.9155% ( 6) 00:09:01.307 20213.513 - 20318.792: 98.9583% ( 6) 00:09:01.307 20318.792 - 20424.071: 99.0011% ( 6) 00:09:01.307 20424.071 - 20529.349: 99.0439% ( 6) 00:09:01.307 20529.349 - 20634.628: 99.0868% ( 6) 00:09:01.307 40637.584 - 40848.141: 99.1082% ( 3) 00:09:01.307 40848.141 - 41058.699: 99.1581% ( 7) 00:09:01.307 41058.699 - 41269.256: 99.2080% ( 7) 00:09:01.307 41269.256 - 41479.814: 99.2580% ( 7) 00:09:01.307 41479.814 - 41690.371: 99.3008% ( 6) 00:09:01.307 41690.371 - 41900.929: 99.3507% ( 7) 00:09:01.307 41900.929 - 42111.486: 99.3864% ( 5) 00:09:01.307 42111.486 - 42322.043: 99.4292% ( 6) 00:09:01.307 42322.043 - 42532.601: 99.4792% ( 7) 00:09:01.307 42532.601 - 42743.158: 99.5291% ( 7) 00:09:01.307 42743.158 - 42953.716: 99.5434% ( 2) 00:09:01.307 47375.422 - 47585.979: 99.5576% ( 2) 00:09:01.307 47585.979 - 47796.537: 99.6076% ( 7) 00:09:01.307 
47796.537 - 48007.094: 99.6504% ( 6) 00:09:01.307 48007.094 - 48217.651: 99.7003% ( 7) 00:09:01.307 48217.651 - 48428.209: 99.7503% ( 7) 00:09:01.307 48428.209 - 48638.766: 99.8002% ( 7) 00:09:01.307 48638.766 - 48849.324: 99.8573% ( 8) 00:09:01.307 48849.324 - 49059.881: 99.9072% ( 7) 00:09:01.307 49059.881 - 49270.439: 99.9572% ( 7) 00:09:01.307 49270.439 - 49480.996: 100.0000% ( 6) 00:09:01.307 00:09:01.307 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:01.307 ============================================================================== 00:09:01.307 Range in us Cumulative IO count 00:09:01.307 7685.346 - 7737.986: 0.0143% ( 2) 00:09:01.307 7737.986 - 7790.625: 0.0499% ( 5) 00:09:01.307 7790.625 - 7843.264: 0.1926% ( 20) 00:09:01.307 7843.264 - 7895.904: 0.4638% ( 38) 00:09:01.307 7895.904 - 7948.543: 1.1273% ( 93) 00:09:01.307 7948.543 - 8001.182: 2.7041% ( 221) 00:09:01.307 8001.182 - 8053.822: 4.9515% ( 315) 00:09:01.307 8053.822 - 8106.461: 7.2132% ( 317) 00:09:01.307 8106.461 - 8159.100: 9.9814% ( 388) 00:09:01.307 8159.100 - 8211.740: 13.0850% ( 435) 00:09:01.307 8211.740 - 8264.379: 16.4883% ( 477) 00:09:01.307 8264.379 - 8317.018: 20.3339% ( 539) 00:09:01.307 8317.018 - 8369.658: 24.3008% ( 556) 00:09:01.307 8369.658 - 8422.297: 28.4817% ( 586) 00:09:01.307 8422.297 - 8474.937: 33.1835% ( 659) 00:09:01.307 8474.937 - 8527.576: 38.0922% ( 688) 00:09:01.307 8527.576 - 8580.215: 43.0080% ( 689) 00:09:01.307 8580.215 - 8632.855: 48.1164% ( 716) 00:09:01.307 8632.855 - 8685.494: 53.2677% ( 722) 00:09:01.307 8685.494 - 8738.133: 58.5188% ( 736) 00:09:01.307 8738.133 - 8790.773: 63.5845% ( 710) 00:09:01.307 8790.773 - 8843.412: 68.4717% ( 685) 00:09:01.307 8843.412 - 8896.051: 72.9167% ( 623) 00:09:01.307 8896.051 - 8948.691: 76.6053% ( 517) 00:09:01.307 8948.691 - 9001.330: 79.7874% ( 446) 00:09:01.307 9001.330 - 9053.969: 82.4344% ( 371) 00:09:01.307 9053.969 - 9106.609: 84.7959% ( 331) 00:09:01.307 9106.609 - 9159.248: 86.7865% ( 279) 00:09:01.307 9159.248 - 9211.888: 88.6059% ( 255) 00:09:01.307 9211.888 - 9264.527: 90.2183% ( 226) 00:09:01.307 9264.527 - 9317.166: 91.4598% ( 174) 00:09:01.307 9317.166 - 9369.806: 92.3659% ( 127) 00:09:01.307 9369.806 - 9422.445: 93.0080% ( 90) 00:09:01.307 9422.445 - 9475.084: 93.4575% ( 63) 00:09:01.307 9475.084 - 9527.724: 93.7500% ( 41) 00:09:01.307 9527.724 - 9580.363: 93.9783% ( 32) 00:09:01.307 9580.363 - 9633.002: 94.1067% ( 18) 00:09:01.307 9633.002 - 9685.642: 94.2423% ( 19) 00:09:01.307 9685.642 - 9738.281: 94.3564% ( 16) 00:09:01.307 9738.281 - 9790.920: 94.4706% ( 16) 00:09:01.307 9790.920 - 9843.560: 94.5919% ( 17) 00:09:01.307 9843.560 - 9896.199: 94.7203% ( 18) 00:09:01.307 9896.199 - 9948.839: 94.8630% ( 20) 00:09:01.307 9948.839 - 10001.478: 95.0271% ( 23) 00:09:01.307 10001.478 - 10054.117: 95.1769% ( 21) 00:09:01.307 10054.117 - 10106.757: 95.2911% ( 16) 00:09:01.307 10106.757 - 10159.396: 95.3981% ( 15) 00:09:01.307 10159.396 - 10212.035: 95.4980% ( 14) 00:09:01.307 10212.035 - 10264.675: 95.5979% ( 14) 00:09:01.307 10264.675 - 10317.314: 95.6906% ( 13) 00:09:01.307 10317.314 - 10369.953: 95.7620% ( 10) 00:09:01.307 10369.953 - 10422.593: 95.8547% ( 13) 00:09:01.307 10422.593 - 10475.232: 95.9047% ( 7) 00:09:01.307 10475.232 - 10527.871: 95.9475% ( 6) 00:09:01.307 10527.871 - 10580.511: 96.0117% ( 9) 00:09:01.307 10580.511 - 10633.150: 96.0688% ( 8) 00:09:01.307 10633.150 - 10685.790: 96.1401% ( 10) 00:09:01.307 10685.790 - 10738.429: 96.1829% ( 6) 00:09:01.307 10738.429 - 10791.068: 96.2400% ( 8) 
00:09:01.307 10791.068 - 10843.708: 96.2757% ( 5) 00:09:01.307 10843.708 - 10896.347: 96.3256% ( 7) 00:09:01.307 10896.347 - 10948.986: 96.3542% ( 4) 00:09:01.307 10948.986 - 11001.626: 96.4041% ( 7) 00:09:01.307 11001.626 - 11054.265: 96.4326% ( 4) 00:09:01.307 11054.265 - 11106.904: 96.4683% ( 5) 00:09:01.307 11106.904 - 11159.544: 96.4969% ( 4) 00:09:01.307 11159.544 - 11212.183: 96.5397% ( 6) 00:09:01.307 11212.183 - 11264.822: 96.5682% ( 4) 00:09:01.307 11264.822 - 11317.462: 96.5967% ( 4) 00:09:01.307 11317.462 - 11370.101: 96.6182% ( 3) 00:09:01.307 11370.101 - 11422.741: 96.6396% ( 3) 00:09:01.307 11422.741 - 11475.380: 96.6681% ( 4) 00:09:01.307 11475.380 - 11528.019: 96.7109% ( 6) 00:09:01.307 11528.019 - 11580.659: 96.7537% ( 6) 00:09:01.307 11580.659 - 11633.298: 96.7965% ( 6) 00:09:01.307 11633.298 - 11685.937: 96.8465% ( 7) 00:09:01.307 11685.937 - 11738.577: 96.8893% ( 6) 00:09:01.307 11738.577 - 11791.216: 96.9321% ( 6) 00:09:01.307 11791.216 - 11843.855: 96.9749% ( 6) 00:09:01.307 11843.855 - 11896.495: 97.0106% ( 5) 00:09:01.307 11896.495 - 11949.134: 97.0462% ( 5) 00:09:01.307 11949.134 - 12001.773: 97.0676% ( 3) 00:09:01.307 12001.773 - 12054.413: 97.0962% ( 4) 00:09:01.307 12054.413 - 12107.052: 97.1390% ( 6) 00:09:01.307 12107.052 - 12159.692: 97.1747% ( 5) 00:09:01.307 12159.692 - 12212.331: 97.2103% ( 5) 00:09:01.307 12212.331 - 12264.970: 97.2603% ( 7) 00:09:01.307 12264.970 - 12317.610: 97.3102% ( 7) 00:09:01.307 12317.610 - 12370.249: 97.3530% ( 6) 00:09:01.307 12370.249 - 12422.888: 97.3816% ( 4) 00:09:01.307 12422.888 - 12475.528: 97.3958% ( 2) 00:09:01.307 12475.528 - 12528.167: 97.4101% ( 2) 00:09:01.307 12528.167 - 12580.806: 97.4244% ( 2) 00:09:01.307 12580.806 - 12633.446: 97.4458% ( 3) 00:09:01.307 12633.446 - 12686.085: 97.4600% ( 2) 00:09:01.307 12686.085 - 12738.724: 97.4814% ( 3) 00:09:01.307 12738.724 - 12791.364: 97.5029% ( 3) 00:09:01.307 12791.364 - 12844.003: 97.5171% ( 2) 00:09:01.308 12844.003 - 12896.643: 97.5314% ( 2) 00:09:01.308 12896.643 - 12949.282: 97.5528% ( 3) 00:09:01.308 12949.282 - 13001.921: 97.5742% ( 3) 00:09:01.308 13001.921 - 13054.561: 97.5885% ( 2) 00:09:01.308 13054.561 - 13107.200: 97.6099% ( 3) 00:09:01.308 13107.200 - 13159.839: 97.6313% ( 3) 00:09:01.308 13159.839 - 13212.479: 97.6455% ( 2) 00:09:01.308 13212.479 - 13265.118: 97.6598% ( 2) 00:09:01.308 13265.118 - 13317.757: 97.6812% ( 3) 00:09:01.308 13317.757 - 13370.397: 97.6955% ( 2) 00:09:01.308 13370.397 - 13423.036: 97.7169% ( 3) 00:09:01.308 14633.741 - 14739.020: 97.7454% ( 4) 00:09:01.308 14739.020 - 14844.299: 97.7740% ( 4) 00:09:01.308 14844.299 - 14949.578: 97.8168% ( 6) 00:09:01.308 14949.578 - 15054.856: 97.8525% ( 5) 00:09:01.308 15054.856 - 15160.135: 97.8881% ( 5) 00:09:01.308 15160.135 - 15265.414: 97.9238% ( 5) 00:09:01.308 15265.414 - 15370.692: 97.9595% ( 5) 00:09:01.308 15370.692 - 15475.971: 98.0023% ( 6) 00:09:01.308 15475.971 - 15581.250: 98.0308% ( 4) 00:09:01.308 15581.250 - 15686.529: 98.0736% ( 6) 00:09:01.308 15686.529 - 15791.807: 98.1093% ( 5) 00:09:01.308 15791.807 - 15897.086: 98.1450% ( 5) 00:09:01.308 15897.086 - 16002.365: 98.1735% ( 4) 00:09:01.308 17581.545 - 17686.824: 98.1949% ( 3) 00:09:01.308 17686.824 - 17792.103: 98.2306% ( 5) 00:09:01.308 17792.103 - 17897.382: 98.2877% ( 8) 00:09:01.308 17897.382 - 18002.660: 98.3305% ( 6) 00:09:01.308 18002.660 - 18107.939: 98.3876% ( 8) 00:09:01.308 18107.939 - 18213.218: 98.4304% ( 6) 00:09:01.308 18213.218 - 18318.496: 98.4803% ( 7) 00:09:01.308 18318.496 - 18423.775: 98.5303% ( 7) 
00:09:01.308 18423.775 - 18529.054: 98.5802% ( 7) 00:09:01.308 18529.054 - 18634.333: 98.6230% ( 6) 00:09:01.308 18634.333 - 18739.611: 98.6658% ( 6) 00:09:01.308 18739.611 - 18844.890: 98.7015% ( 5) 00:09:01.308 18844.890 - 18950.169: 98.7443% ( 6) 00:09:01.308 18950.169 - 19055.447: 98.7871% ( 6) 00:09:01.308 19055.447 - 19160.726: 98.8299% ( 6) 00:09:01.308 19160.726 - 19266.005: 98.8656% ( 5) 00:09:01.308 19266.005 - 19371.284: 98.9084% ( 6) 00:09:01.308 19371.284 - 19476.562: 98.9512% ( 6) 00:09:01.308 19476.562 - 19581.841: 98.9940% ( 6) 00:09:01.308 19581.841 - 19687.120: 99.0368% ( 6) 00:09:01.308 19687.120 - 19792.398: 99.0796% ( 6) 00:09:01.308 19792.398 - 19897.677: 99.0868% ( 1) 00:09:01.308 38953.124 - 39163.682: 99.1153% ( 4) 00:09:01.308 39163.682 - 39374.239: 99.1581% ( 6) 00:09:01.308 39374.239 - 39584.797: 99.2080% ( 7) 00:09:01.308 39584.797 - 39795.354: 99.2509% ( 6) 00:09:01.308 39795.354 - 40005.912: 99.3008% ( 7) 00:09:01.308 40005.912 - 40216.469: 99.3507% ( 7) 00:09:01.308 40216.469 - 40427.027: 99.3936% ( 6) 00:09:01.308 40427.027 - 40637.584: 99.4364% ( 6) 00:09:01.308 40637.584 - 40848.141: 99.4863% ( 7) 00:09:01.308 40848.141 - 41058.699: 99.5434% ( 8) 00:09:01.308 45690.962 - 45901.520: 99.5648% ( 3) 00:09:01.308 45901.520 - 46112.077: 99.6076% ( 6) 00:09:01.308 46112.077 - 46322.635: 99.6575% ( 7) 00:09:01.308 46322.635 - 46533.192: 99.7003% ( 6) 00:09:01.308 46533.192 - 46743.749: 99.7503% ( 7) 00:09:01.308 46743.749 - 46954.307: 99.8002% ( 7) 00:09:01.308 46954.307 - 47164.864: 99.8502% ( 7) 00:09:01.308 47164.864 - 47375.422: 99.9001% ( 7) 00:09:01.308 47375.422 - 47585.979: 99.9501% ( 7) 00:09:01.308 47585.979 - 47796.537: 99.9929% ( 6) 00:09:01.308 47796.537 - 48007.094: 100.0000% ( 1) 00:09:01.308 00:09:01.308 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:01.308 ============================================================================== 00:09:01.308 Range in us Cumulative IO count 00:09:01.308 7737.986 - 7790.625: 0.0143% ( 2) 00:09:01.308 7790.625 - 7843.264: 0.1142% ( 14) 00:09:01.308 7843.264 - 7895.904: 0.4352% ( 45) 00:09:01.308 7895.904 - 7948.543: 1.0559% ( 87) 00:09:01.308 7948.543 - 8001.182: 2.3545% ( 182) 00:09:01.308 8001.182 - 8053.822: 4.2166% ( 261) 00:09:01.308 8053.822 - 8106.461: 6.7494% ( 355) 00:09:01.308 8106.461 - 8159.100: 9.6747% ( 410) 00:09:01.308 8159.100 - 8211.740: 12.9138% ( 454) 00:09:01.308 8211.740 - 8264.379: 16.4384% ( 494) 00:09:01.308 8264.379 - 8317.018: 20.1127% ( 515) 00:09:01.308 8317.018 - 8369.658: 24.0939% ( 558) 00:09:01.308 8369.658 - 8422.297: 28.2392% ( 581) 00:09:01.308 8422.297 - 8474.937: 32.9195% ( 656) 00:09:01.308 8474.937 - 8527.576: 37.8639% ( 693) 00:09:01.308 8527.576 - 8580.215: 43.1436% ( 740) 00:09:01.308 8580.215 - 8632.855: 48.1878% ( 707) 00:09:01.308 8632.855 - 8685.494: 53.4532% ( 738) 00:09:01.308 8685.494 - 8738.133: 58.6401% ( 727) 00:09:01.308 8738.133 - 8790.773: 63.7343% ( 714) 00:09:01.308 8790.773 - 8843.412: 68.4932% ( 667) 00:09:01.308 8843.412 - 8896.051: 72.8596% ( 612) 00:09:01.308 8896.051 - 8948.691: 76.6981% ( 538) 00:09:01.308 8948.691 - 9001.330: 79.8231% ( 438) 00:09:01.308 9001.330 - 9053.969: 82.6056% ( 390) 00:09:01.308 9053.969 - 9106.609: 84.9743% ( 332) 00:09:01.308 9106.609 - 9159.248: 87.0434% ( 290) 00:09:01.308 9159.248 - 9211.888: 88.8699% ( 256) 00:09:01.308 9211.888 - 9264.527: 90.4181% ( 217) 00:09:01.308 9264.527 - 9317.166: 91.6738% ( 176) 00:09:01.308 9317.166 - 9369.806: 92.5942% ( 129) 00:09:01.308 9369.806 - 9422.445: 
00:09:01.308 [remaining per-bucket rows of the preceding latency histogram omitted: cumulative 93.1507% climbing to 100.0000% at ~45480-45691us]
00:09:01.309 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:01.309 ==============================================================================
00:09:01.309        Range in us     Cumulative    IO count
00:09:01.309 [per-bucket rows omitted: cumulative 0.0571% at ~7738-7791us climbing to 100.0000% at ~43165-43375us]
00:09:01.310 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:01.310 ==============================================================================
00:09:01.310        Range in us     Cumulative    IO count
00:09:01.310 [per-bucket rows omitted: cumulative 0.0213% at ~7738-7791us climbing to 100.0000% at ~35585-35795us]
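Each latency histogram above pairs a range in microseconds with the cumulative percentage of I/Os that completed within it, so any percentile can be read off as the first bucket whose cumulative value reaches the target. A minimal Python sketch of that lookup, using two real rows from the (0000:00:12.0) NSID 3 histogram with the intermediate buckets elided (percentile_us is an illustrative helper, not SPDK code):

    # Bucket tuples mirror the histogram rows: (range_lo_us, range_hi_us, cumulative_pct).
    buckets = [
        (7737.986, 7790.625, 0.0213),
        (7790.625, 7843.264, 0.1491),
        # ... intermediate buckets elided ...
        (35584.206, 35794.763, 100.0),
    ]

    def percentile_us(buckets, target_pct):
        """Upper bound of the first bucket whose cumulative percentage
        reaches target_pct (e.g. 99.0 for p99)."""
        for _lo, hi, cum in buckets:
            if cum >= target_pct:
                return hi
        raise ValueError("incomplete histogram")

    print(percentile_us(buckets, 50.0))
    # -> 35794.763 here, only because the intermediate buckets are elided;
    #    with the full bucket table the lookup returns the true bucket
    #    for each percentile.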
00:09:01.311 13:06:52 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:02.691 Initializing NVMe Controllers
00:09:02.691 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:02.691 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:02.691 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:02.691 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:02.691 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:02.691 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:02.691 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:02.691 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:02.691 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:02.691 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:02.691 Initialization complete. Launching workers.
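The spdk_nvme_perf invocation above runs a one-second, 128-deep sequential write workload (-q 128 -w write -t 1) with a 12288-byte I/O size (-o 12288); giving -L twice raises the software latency-tracking level, which is why per-bucket latency histograms appear in this log. In the results table that follows, the MiB/s column is simply IOPS times the I/O size. A quick check of the first row in plain Python, with the values copied from the table:

    iops = 11370.66            # PCIE (0000:00:10.0) NSID 1 row
    io_size = 12288            # bytes per I/O, from -o 12288
    mib_s = iops * io_size / (1024 * 1024)
    print(round(mib_s, 2))     # -> 133.25, matching the MiB/s column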
00:09:02.691 ========================================================
00:09:02.691                                                                            Latency(us)
00:09:02.691 Device Information                       :       IOPS      MiB/s    Average        min        max
00:09:02.691 PCIE (0000:00:10.0) NSID 1 from core 0   :   11370.66     133.25   11317.59    8460.93   47540.70
00:09:02.691 PCIE (0000:00:11.0) NSID 1 from core 0   :   11370.66     133.25   11305.05    8610.62   45996.90
00:09:02.691 PCIE (0000:00:13.0) NSID 1 from core 0   :   11370.66     133.25   11290.09    8604.31   44747.44
00:09:02.691 PCIE (0000:00:12.0) NSID 1 from core 0   :   11370.66     133.25   11276.06    8486.01   43092.60
00:09:02.691 PCIE (0000:00:12.0) NSID 2 from core 0   :   11370.66     133.25   11263.25    8400.09   41657.81
00:09:02.691 PCIE (0000:00:12.0) NSID 3 from core 0   :   11434.19     133.99   11186.65    8458.22   31247.60
00:09:02.691 ========================================================
00:09:02.691 Total                                    :   68287.50     800.24   11273.03    8400.09   47540.70
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8790.773us
00:09:02.691  10.00000% :  9211.888us
00:09:02.691  25.00000% :  9633.002us
00:09:02.691  50.00000% : 10212.035us
00:09:02.691  75.00000% : 11422.741us
00:09:02.691  90.00000% : 15054.856us
00:09:02.691  95.00000% : 17370.988us
00:09:02.691  98.00000% : 18844.890us
00:09:02.691  99.00000% : 34952.533us
00:09:02.691  99.50000% : 44848.733us
00:09:02.691  99.90000% : 47164.864us
00:09:02.691  99.99000% : 47585.979us
00:09:02.691  99.99900% : 47585.979us
00:09:02.691  99.99990% : 47585.979us
00:09:02.691  99.99999% : 47585.979us
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8896.051us
00:09:02.691  10.00000% :  9211.888us
00:09:02.691  25.00000% :  9633.002us
00:09:02.691  50.00000% : 10159.396us
00:09:02.691  75.00000% : 11264.822us
00:09:02.691  90.00000% : 15160.135us
00:09:02.691  95.00000% : 17265.709us
00:09:02.691  98.00000% : 18634.333us
00:09:02.691  99.00000% : 33478.631us
00:09:02.691  99.50000% : 43585.388us
00:09:02.691  99.90000% : 45690.962us
00:09:02.691  99.99000% : 46112.077us
00:09:02.691  99.99900% : 46112.077us
00:09:02.691  99.99990% : 46112.077us
00:09:02.691  99.99999% : 46112.077us
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8896.051us
00:09:02.691  10.00000% :  9264.527us
00:09:02.691  25.00000% :  9633.002us
00:09:02.691  50.00000% : 10159.396us
00:09:02.691  75.00000% : 11370.101us
00:09:02.691  90.00000% : 15054.856us
00:09:02.691  95.00000% : 17055.152us
00:09:02.691  98.00000% : 18844.890us
00:09:02.691  99.00000% : 32425.844us
00:09:02.691  99.50000% : 42322.043us
00:09:02.691  99.90000% : 44427.618us
00:09:02.691  99.99000% : 44848.733us
00:09:02.691  99.99900% : 44848.733us
00:09:02.691  99.99990% : 44848.733us
00:09:02.691  99.99999% : 44848.733us
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8896.051us
00:09:02.691  10.00000% :  9264.527us
00:09:02.691  25.00000% :  9633.002us
00:09:02.691  50.00000% : 10159.396us
00:09:02.691  75.00000% : 11317.462us
00:09:02.691  90.00000% : 15054.856us
00:09:02.691  95.00000% : 16739.316us
00:09:02.691  98.00000% : 19055.447us
00:09:02.691  99.00000% : 30951.942us
00:09:02.691  99.50000% : 40637.584us
00:09:02.691  99.90000% : 42743.158us
00:09:02.691  99.99000% : 43164.273us
00:09:02.691  99.99900% : 43164.273us
00:09:02.691  99.99990% : 43164.273us
00:09:02.691  99.99999% : 43164.273us
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8843.412us
00:09:02.691  10.00000% :  9211.888us
00:09:02.691  25.00000% :  9685.642us
00:09:02.691  50.00000% : 10159.396us
00:09:02.691  75.00000% : 11475.380us
00:09:02.691  90.00000% : 14949.578us
00:09:02.691  95.00000% : 16844.594us
00:09:02.691  98.00000% : 19371.284us
00:09:02.691  99.00000% : 29267.483us
00:09:02.691  99.50000% : 39163.682us
00:09:02.691  99.90000% : 41269.256us
00:09:02.691  99.99000% : 41690.371us
00:09:02.691  99.99900% : 41690.371us
00:09:02.691  99.99990% : 41690.371us
00:09:02.691  99.99999% : 41690.371us
00:09:02.691
00:09:02.691 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:02.691 =================================================================================
00:09:02.691   1.00000% :  8843.412us
00:09:02.691  10.00000% :  9264.527us
00:09:02.691  25.00000% :  9633.002us
00:09:02.691  50.00000% : 10159.396us
00:09:02.691  75.00000% : 11475.380us
00:09:02.691  90.00000% : 15160.135us
00:09:02.691  95.00000% : 17160.431us
00:09:02.691  98.00000% : 19687.120us
00:09:02.691  99.00000% : 20739.907us
00:09:02.691  99.50000% : 28846.368us
00:09:02.691  99.90000% : 30951.942us
00:09:02.691  99.99000% : 31373.057us
00:09:02.691  99.99900% : 31373.057us
00:09:02.691  99.99990% : 31373.057us
00:09:02.691  99.99999% : 31373.057us
00:09:02.691
00:09:02.692 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:02.692 ==============================================================================
00:09:02.692        Range in us     Cumulative    IO count
00:09:02.692 [per-bucket rows omitted: cumulative 0.0262% at ~8422-8475us climbing to 100.0000% at ~47375-47586us]
00:09:02.693 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:02.693 ==============================================================================
00:09:02.693        Range in us     Cumulative    IO count
00:09:02.693 [per-bucket rows omitted: cumulative 0.0175% at ~8580-8633us climbing to 100.0000% at ~45902-46112us]
00:09:02.694 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:02.694 ==============================================================================
00:09:02.694        Range in us     Cumulative    IO count
00:09:02.694 [per-bucket rows omitted: cumulative 0.0087% at ~8580-8633us climbing to 100.0000% at ~44638-44849us]
00:09:02.695 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:02.695 ==============================================================================
00:09:02.695        Range in us     Cumulative    IO count
00:09:02.695 [per-bucket rows omitted: cumulative 0.0087% at ~8475-8528us climbing to 100.0000% at ~42954-43164us]
00:09:02.696 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:02.696 ==============================================================================
00:09:02.696        Range in us     Cumulative    IO count
00:09:02.696 [per-bucket rows omitted: cumulative 0.0087% at ~8370-8422us through 85.0646% at ~13107-13160us; the log is truncated mid-histogram at this point]
00:09:02.696 13159.839 - 13212.479: 85.1868% ( 14) 00:09:02.696 13212.479 - 13265.118: 85.2828% ( 11) 00:09:02.696 13265.118 - 13317.757: 85.4050% ( 14) 00:09:02.696 13317.757 - 13370.397: 85.5010% ( 11) 00:09:02.696 13370.397 - 13423.036: 85.5709% ( 8) 00:09:02.696 13423.036 - 13475.676: 85.6756% ( 12) 00:09:02.696 13475.676 - 13580.954: 85.8677% ( 22) 00:09:02.696 13580.954 - 13686.233: 86.0510% ( 21) 00:09:02.696 13686.233 - 13791.512: 86.1557% ( 12) 00:09:02.696 13791.512 - 13896.790: 86.3128% ( 18) 00:09:02.696 13896.790 - 14002.069: 86.5485% ( 27) 00:09:02.696 14002.069 - 14107.348: 86.8802% ( 38) 00:09:02.696 14107.348 - 14212.627: 87.2207% ( 39) 00:09:02.696 14212.627 - 14317.905: 87.7619% ( 62) 00:09:02.696 14317.905 - 14423.184: 88.4253% ( 76) 00:09:02.696 14423.184 - 14528.463: 88.8530% ( 49) 00:09:02.696 14528.463 - 14633.741: 89.2109% ( 41) 00:09:02.696 14633.741 - 14739.020: 89.5164% ( 35) 00:09:02.696 14739.020 - 14844.299: 89.8568% ( 39) 00:09:02.696 14844.299 - 14949.578: 90.1711% ( 36) 00:09:02.696 14949.578 - 15054.856: 90.4417% ( 31) 00:09:02.696 15054.856 - 15160.135: 90.6774% ( 27) 00:09:02.696 15160.135 - 15265.414: 90.9043% ( 26) 00:09:02.696 15265.414 - 15370.692: 91.0964% ( 22) 00:09:02.696 15370.692 - 15475.971: 91.3670% ( 31) 00:09:02.696 15475.971 - 15581.250: 91.4717% ( 12) 00:09:02.696 15581.250 - 15686.529: 91.7860% ( 36) 00:09:02.696 15686.529 - 15791.807: 92.1177% ( 38) 00:09:02.696 15791.807 - 15897.086: 92.5454% ( 49) 00:09:02.696 15897.086 - 16002.365: 92.8422% ( 34) 00:09:02.696 16002.365 - 16107.643: 93.0604% ( 25) 00:09:02.696 16107.643 - 16212.922: 93.2350% ( 20) 00:09:02.696 16212.922 - 16318.201: 93.3223% ( 10) 00:09:02.696 16318.201 - 16423.480: 93.4445% ( 14) 00:09:02.696 16423.480 - 16528.758: 93.6278% ( 21) 00:09:02.696 16528.758 - 16634.037: 94.0904% ( 53) 00:09:02.696 16634.037 - 16739.316: 94.7277% ( 73) 00:09:02.696 16739.316 - 16844.594: 95.1903% ( 53) 00:09:02.696 16844.594 - 16949.873: 95.5045% ( 36) 00:09:02.696 16949.873 - 17055.152: 95.6355% ( 15) 00:09:02.696 17055.152 - 17160.431: 95.7577% ( 14) 00:09:02.696 17160.431 - 17265.709: 95.8362% ( 9) 00:09:02.696 17265.709 - 17370.988: 95.9235% ( 10) 00:09:02.696 17370.988 - 17476.267: 95.9934% ( 8) 00:09:02.696 17476.267 - 17581.545: 96.0545% ( 7) 00:09:02.696 17581.545 - 17686.824: 96.0981% ( 5) 00:09:02.696 17686.824 - 17792.103: 96.1767% ( 9) 00:09:02.696 17792.103 - 17897.382: 96.2902% ( 13) 00:09:02.696 17897.382 - 18002.660: 96.4124% ( 14) 00:09:02.696 18002.660 - 18107.939: 96.5171% ( 12) 00:09:02.696 18107.939 - 18213.218: 96.6480% ( 15) 00:09:02.696 18213.218 - 18318.496: 96.7353% ( 10) 00:09:02.696 18318.496 - 18423.775: 96.8401% ( 12) 00:09:02.696 18423.775 - 18529.054: 97.0583% ( 25) 00:09:02.696 18529.054 - 18634.333: 97.2067% ( 17) 00:09:02.696 18634.333 - 18739.611: 97.3638% ( 18) 00:09:02.696 18739.611 - 18844.890: 97.4773% ( 13) 00:09:02.696 18844.890 - 18950.169: 97.5821% ( 12) 00:09:02.696 18950.169 - 19055.447: 97.6432% ( 7) 00:09:02.696 19055.447 - 19160.726: 97.7741% ( 15) 00:09:02.696 19160.726 - 19266.005: 97.8876% ( 13) 00:09:02.696 19266.005 - 19371.284: 98.0622% ( 20) 00:09:02.696 19371.284 - 19476.562: 98.1582% ( 11) 00:09:02.696 19476.562 - 19581.841: 98.2018% ( 5) 00:09:02.696 19581.841 - 19687.120: 98.2367% ( 4) 00:09:02.696 19687.120 - 19792.398: 98.2716% ( 4) 00:09:02.696 19792.398 - 19897.677: 98.3066% ( 4) 00:09:02.696 19897.677 - 20002.956: 98.3415% ( 4) 00:09:02.696 20002.956 - 20108.235: 98.3764% ( 4) 00:09:02.697 20108.235 - 20213.513: 98.4026% ( 
3) 00:09:02.697 20213.513 - 20318.792: 98.4288% ( 3) 00:09:02.697 20318.792 - 20424.071: 98.4724% ( 5) 00:09:02.697 20424.071 - 20529.349: 98.5946% ( 14) 00:09:02.697 20529.349 - 20634.628: 98.8478% ( 29) 00:09:02.697 20634.628 - 20739.907: 98.8652% ( 2) 00:09:02.697 20739.907 - 20845.186: 98.8827% ( 2) 00:09:02.697 28635.810 - 28846.368: 98.9176% ( 4) 00:09:02.697 28846.368 - 29056.925: 98.9612% ( 5) 00:09:02.697 29056.925 - 29267.483: 99.0049% ( 5) 00:09:02.697 29267.483 - 29478.040: 99.0485% ( 5) 00:09:02.697 29478.040 - 29688.598: 99.0922% ( 5) 00:09:02.697 29688.598 - 29899.155: 99.1358% ( 5) 00:09:02.697 29899.155 - 30109.712: 99.1882% ( 6) 00:09:02.697 30109.712 - 30320.270: 99.2318% ( 5) 00:09:02.697 30320.270 - 30530.827: 99.2668% ( 4) 00:09:02.697 30530.827 - 30741.385: 99.3104% ( 5) 00:09:02.697 30741.385 - 30951.942: 99.3453% ( 4) 00:09:02.697 30951.942 - 31162.500: 99.3977% ( 6) 00:09:02.697 31162.500 - 31373.057: 99.4326% ( 4) 00:09:02.697 31373.057 - 31583.614: 99.4413% ( 1) 00:09:02.697 38742.567 - 38953.124: 99.4675% ( 3) 00:09:02.697 38953.124 - 39163.682: 99.5112% ( 5) 00:09:02.697 39163.682 - 39374.239: 99.5461% ( 4) 00:09:02.697 39374.239 - 39584.797: 99.5810% ( 4) 00:09:02.697 39584.797 - 39795.354: 99.6247% ( 5) 00:09:02.697 39795.354 - 40005.912: 99.6683% ( 5) 00:09:02.697 40005.912 - 40216.469: 99.7119% ( 5) 00:09:02.697 40216.469 - 40427.027: 99.7556% ( 5) 00:09:02.697 40427.027 - 40637.584: 99.7905% ( 4) 00:09:02.697 40637.584 - 40848.141: 99.8254% ( 4) 00:09:02.697 40848.141 - 41058.699: 99.8691% ( 5) 00:09:02.697 41058.699 - 41269.256: 99.9127% ( 5) 00:09:02.697 41269.256 - 41479.814: 99.9651% ( 6) 00:09:02.697 41479.814 - 41690.371: 100.0000% ( 4) 00:09:02.697 00:09:02.697 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:02.697 ============================================================================== 00:09:02.697 Range in us Cumulative IO count 00:09:02.697 8422.297 - 8474.937: 0.0087% ( 1) 00:09:02.697 8474.937 - 8527.576: 0.0174% ( 1) 00:09:02.697 8527.576 - 8580.215: 0.0347% ( 2) 00:09:02.697 8580.215 - 8632.855: 0.0955% ( 7) 00:09:02.697 8632.855 - 8685.494: 0.1736% ( 9) 00:09:02.697 8685.494 - 8738.133: 0.3125% ( 16) 00:09:02.697 8738.133 - 8790.773: 0.6424% ( 38) 00:09:02.697 8790.773 - 8843.412: 1.0156% ( 43) 00:09:02.697 8843.412 - 8896.051: 1.5278% ( 59) 00:09:02.697 8896.051 - 8948.691: 2.4653% ( 108) 00:09:02.697 8948.691 - 9001.330: 3.5503% ( 125) 00:09:02.697 9001.330 - 9053.969: 4.7309% ( 136) 00:09:02.697 9053.969 - 9106.609: 6.3976% ( 192) 00:09:02.697 9106.609 - 9159.248: 8.1250% ( 199) 00:09:02.697 9159.248 - 9211.888: 9.8524% ( 199) 00:09:02.697 9211.888 - 9264.527: 11.7014% ( 213) 00:09:02.697 9264.527 - 9317.166: 13.2986% ( 184) 00:09:02.697 9317.166 - 9369.806: 14.9392% ( 189) 00:09:02.697 9369.806 - 9422.445: 16.7188% ( 205) 00:09:02.697 9422.445 - 9475.084: 18.6458% ( 222) 00:09:02.697 9475.084 - 9527.724: 20.4340% ( 206) 00:09:02.697 9527.724 - 9580.363: 22.9080% ( 285) 00:09:02.697 9580.363 - 9633.002: 25.6337% ( 314) 00:09:02.697 9633.002 - 9685.642: 28.0035% ( 273) 00:09:02.697 9685.642 - 9738.281: 30.2170% ( 255) 00:09:02.697 9738.281 - 9790.920: 32.5347% ( 267) 00:09:02.697 9790.920 - 9843.560: 34.9392% ( 277) 00:09:02.697 9843.560 - 9896.199: 37.3351% ( 276) 00:09:02.697 9896.199 - 9948.839: 39.7569% ( 279) 00:09:02.697 9948.839 - 10001.478: 42.6823% ( 337) 00:09:02.697 10001.478 - 10054.117: 45.7118% ( 349) 00:09:02.697 10054.117 - 10106.757: 48.4635% ( 317) 00:09:02.697 10106.757 - 10159.396: 51.2326% ( 
319) 00:09:02.697 10159.396 - 10212.035: 53.5677% ( 269) 00:09:02.697 10212.035 - 10264.675: 55.7465% ( 251) 00:09:02.697 10264.675 - 10317.314: 57.1875% ( 166) 00:09:02.697 10317.314 - 10369.953: 58.5330% ( 155) 00:09:02.697 10369.953 - 10422.593: 59.8611% ( 153) 00:09:02.697 10422.593 - 10475.232: 61.0851% ( 141) 00:09:02.697 10475.232 - 10527.871: 62.0573% ( 112) 00:09:02.697 10527.871 - 10580.511: 63.3420% ( 148) 00:09:02.697 10580.511 - 10633.150: 64.2795% ( 108) 00:09:02.697 10633.150 - 10685.790: 65.2344% ( 110) 00:09:02.697 10685.790 - 10738.429: 66.2674% ( 119) 00:09:02.697 10738.429 - 10791.068: 67.0920% ( 95) 00:09:02.697 10791.068 - 10843.708: 68.1076% ( 117) 00:09:02.697 10843.708 - 10896.347: 68.8368% ( 84) 00:09:02.697 10896.347 - 10948.986: 69.5399% ( 81) 00:09:02.697 10948.986 - 11001.626: 70.1736% ( 73) 00:09:02.697 11001.626 - 11054.265: 70.7378% ( 65) 00:09:02.697 11054.265 - 11106.904: 71.1024% ( 42) 00:09:02.697 11106.904 - 11159.544: 71.6580% ( 64) 00:09:02.697 11159.544 - 11212.183: 72.2049% ( 63) 00:09:02.697 11212.183 - 11264.822: 72.8559% ( 75) 00:09:02.697 11264.822 - 11317.462: 73.4635% ( 70) 00:09:02.697 11317.462 - 11370.101: 74.2361% ( 89) 00:09:02.697 11370.101 - 11422.741: 74.7483% ( 59) 00:09:02.697 11422.741 - 11475.380: 75.2431% ( 57) 00:09:02.697 11475.380 - 11528.019: 75.8941% ( 75) 00:09:02.697 11528.019 - 11580.659: 76.3715% ( 55) 00:09:02.697 11580.659 - 11633.298: 76.7448% ( 43) 00:09:02.697 11633.298 - 11685.937: 77.0052% ( 30) 00:09:02.697 11685.937 - 11738.577: 77.2917% ( 33) 00:09:02.697 11738.577 - 11791.216: 77.6302% ( 39) 00:09:02.697 11791.216 - 11843.855: 78.0295% ( 46) 00:09:02.697 11843.855 - 11896.495: 78.4462% ( 48) 00:09:02.697 11896.495 - 11949.134: 78.8281% ( 44) 00:09:02.697 11949.134 - 12001.773: 79.2535% ( 49) 00:09:02.697 12001.773 - 12054.413: 79.6528% ( 46) 00:09:02.697 12054.413 - 12107.052: 80.0000% ( 40) 00:09:02.697 12107.052 - 12159.692: 80.2604% ( 30) 00:09:02.697 12159.692 - 12212.331: 80.4340% ( 20) 00:09:02.697 12212.331 - 12264.970: 80.5816% ( 17) 00:09:02.697 12264.970 - 12317.610: 80.7552% ( 20) 00:09:02.697 12317.610 - 12370.249: 80.9288% ( 20) 00:09:02.697 12370.249 - 12422.888: 81.3715% ( 51) 00:09:02.697 12422.888 - 12475.528: 81.6927% ( 37) 00:09:02.697 12475.528 - 12528.167: 81.9705% ( 32) 00:09:02.697 12528.167 - 12580.806: 82.3177% ( 40) 00:09:02.697 12580.806 - 12633.446: 82.6562% ( 39) 00:09:02.697 12633.446 - 12686.085: 82.9253% ( 31) 00:09:02.697 12686.085 - 12738.724: 83.2552% ( 38) 00:09:02.697 12738.724 - 12791.364: 83.7153% ( 53) 00:09:02.697 12791.364 - 12844.003: 84.1667% ( 52) 00:09:02.697 12844.003 - 12896.643: 84.4444% ( 32) 00:09:02.697 12896.643 - 12949.282: 84.6094% ( 19) 00:09:02.697 12949.282 - 13001.921: 84.7309% ( 14) 00:09:02.697 13001.921 - 13054.561: 84.8438% ( 13) 00:09:02.697 13054.561 - 13107.200: 84.9132% ( 8) 00:09:02.697 13107.200 - 13159.839: 84.9653% ( 6) 00:09:02.697 13159.839 - 13212.479: 84.9826% ( 2) 00:09:02.697 13212.479 - 13265.118: 85.0260% ( 5) 00:09:02.697 13265.118 - 13317.757: 85.0781% ( 6) 00:09:02.697 13317.757 - 13370.397: 85.1649% ( 10) 00:09:02.697 13370.397 - 13423.036: 85.2691% ( 12) 00:09:02.697 13423.036 - 13475.676: 85.3472% ( 9) 00:09:02.697 13475.676 - 13580.954: 85.5816% ( 27) 00:09:02.697 13580.954 - 13686.233: 85.8333% ( 29) 00:09:02.697 13686.233 - 13791.512: 86.0851% ( 29) 00:09:02.697 13791.512 - 13896.790: 86.5278% ( 51) 00:09:02.697 13896.790 - 14002.069: 86.9531% ( 49) 00:09:02.697 14002.069 - 14107.348: 87.3524% ( 46) 00:09:02.697 14107.348 - 
14212.627: 87.6649% ( 36) 00:09:02.697 14212.627 - 14317.905: 87.9167% ( 29) 00:09:02.697 14317.905 - 14423.184: 88.0642% ( 17) 00:09:02.697 14423.184 - 14528.463: 88.1858% ( 14) 00:09:02.697 14528.463 - 14633.741: 88.3767% ( 22) 00:09:02.697 14633.741 - 14739.020: 88.5764% ( 23) 00:09:02.697 14739.020 - 14844.299: 88.8976% ( 37) 00:09:02.697 14844.299 - 14949.578: 89.1840% ( 33) 00:09:02.697 14949.578 - 15054.856: 89.5312% ( 40) 00:09:02.697 15054.856 - 15160.135: 90.0000% ( 54) 00:09:02.697 15160.135 - 15265.414: 90.3559% ( 41) 00:09:02.697 15265.414 - 15370.692: 90.7899% ( 50) 00:09:02.697 15370.692 - 15475.971: 91.1806% ( 45) 00:09:02.697 15475.971 - 15581.250: 91.3976% ( 25) 00:09:02.697 15581.250 - 15686.529: 91.6059% ( 24) 00:09:02.697 15686.529 - 15791.807: 91.8056% ( 23) 00:09:02.697 15791.807 - 15897.086: 92.0399% ( 27) 00:09:02.697 15897.086 - 16002.365: 92.3177% ( 32) 00:09:02.697 16002.365 - 16107.643: 92.5000% ( 21) 00:09:02.697 16107.643 - 16212.922: 92.6215% ( 14) 00:09:02.697 16212.922 - 16318.201: 92.6649% ( 5) 00:09:02.697 16318.201 - 16423.480: 92.7691% ( 12) 00:09:02.697 16423.480 - 16528.758: 92.8733% ( 12) 00:09:02.697 16528.758 - 16634.037: 93.0295% ( 18) 00:09:02.697 16634.037 - 16739.316: 93.3073% ( 32) 00:09:02.697 16739.316 - 16844.594: 93.5069% ( 23) 00:09:02.697 16844.594 - 16949.873: 93.8194% ( 36) 00:09:02.697 16949.873 - 17055.152: 94.5660% ( 86) 00:09:02.697 17055.152 - 17160.431: 95.0174% ( 52) 00:09:02.697 17160.431 - 17265.709: 95.3906% ( 43) 00:09:02.697 17265.709 - 17370.988: 95.8333% ( 51) 00:09:02.697 17370.988 - 17476.267: 96.1632% ( 38) 00:09:02.697 17476.267 - 17581.545: 96.4323% ( 31) 00:09:02.697 17581.545 - 17686.824: 96.6493% ( 25) 00:09:02.697 17686.824 - 17792.103: 96.7535% ( 12) 00:09:02.697 17792.103 - 17897.382: 96.8316% ( 9) 00:09:02.697 17897.382 - 18002.660: 96.9010% ( 8) 00:09:02.697 18002.660 - 18107.939: 96.9878% ( 10) 00:09:02.697 18107.939 - 18213.218: 97.0920% ( 12) 00:09:02.697 18213.218 - 18318.496: 97.1875% ( 11) 00:09:02.697 18318.496 - 18423.775: 97.2917% ( 12) 00:09:02.697 18423.775 - 18529.054: 97.3177% ( 3) 00:09:02.697 18529.054 - 18634.333: 97.3438% ( 3) 00:09:02.697 18634.333 - 18739.611: 97.3785% ( 4) 00:09:02.697 18739.611 - 18844.890: 97.4132% ( 4) 00:09:02.697 18844.890 - 18950.169: 97.4653% ( 6) 00:09:02.697 18950.169 - 19055.447: 97.5174% ( 6) 00:09:02.697 19055.447 - 19160.726: 97.5781% ( 7) 00:09:02.697 19160.726 - 19266.005: 97.6649% ( 10) 00:09:02.697 19266.005 - 19371.284: 97.7517% ( 10) 00:09:02.697 19371.284 - 19476.562: 97.8299% ( 9) 00:09:02.697 19476.562 - 19581.841: 97.9167% ( 10) 00:09:02.697 19581.841 - 19687.120: 98.0035% ( 10) 00:09:02.697 19687.120 - 19792.398: 98.0990% ( 11) 00:09:02.697 19792.398 - 19897.677: 98.2552% ( 18) 00:09:02.698 19897.677 - 20002.956: 98.4028% ( 17) 00:09:02.698 20002.956 - 20108.235: 98.5069% ( 12) 00:09:02.698 20108.235 - 20213.513: 98.6285% ( 14) 00:09:02.698 20213.513 - 20318.792: 98.7240% ( 11) 00:09:02.698 20318.792 - 20424.071: 98.7934% ( 8) 00:09:02.698 20424.071 - 20529.349: 98.8628% ( 8) 00:09:02.698 20529.349 - 20634.628: 98.9323% ( 8) 00:09:02.698 20634.628 - 20739.907: 99.2708% ( 39) 00:09:02.698 20739.907 - 20845.186: 99.3142% ( 5) 00:09:02.698 20845.186 - 20950.464: 99.3576% ( 5) 00:09:02.698 20950.464 - 21055.743: 99.4184% ( 7) 00:09:02.698 21055.743 - 21161.022: 99.4358% ( 2) 00:09:02.698 21161.022 - 21266.300: 99.4444% ( 1) 00:09:02.698 28425.253 - 28635.810: 99.4618% ( 2) 00:09:02.698 28635.810 - 28846.368: 99.5052% ( 5) 00:09:02.698 28846.368 - 
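Each bucket line in these histograms has the form "<low us> - <high us>: <cumulative %> ( <IO count> )", so a latency percentile can be read off as the first bucket whose cumulative column reaches the target. The helper below is a hypothetical sketch, not part of the SPDK tree; it assumes the histogram has been saved to a plain text file, one bucket per line, with the Jenkins time prefix stripped.

    #!/usr/bin/env bash
    # p99.sh <histogram.txt> -- print the first bucket at or above 99% cumulative IO count.
    # Expects lines like: "9527.724 - 9580.363: 20.5220% ( 207)"
    awk -F'[:%]' '
        /%/ {
            split($1, range, " - ")   # range[1] = low edge (us), range[2] = high edge (us)
            cum = $2 + 0              # cumulative percentage as a number
            if (cum >= 99.0) {
                printf "p99 <= %.3f us (bucket %s - %s, cumulative %.4f%%)\n",
                       range[2], range[1], range[2], cum
                exit
            }
        }' "$1"

The same one-liner works, unchanged, for the submit/complete histograms printed by nvme_overhead further down.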
00:09:02.698 
00:09:02.698 13:06:54 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:02.698 
00:09:02.698 real 0m2.717s
00:09:02.698 user 0m2.298s
00:09:02.698 sys 0m0.317s
00:09:02.698 13:06:54 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.698 ************************************
00:09:02.698 END TEST nvme_perf
00:09:02.698 ************************************
00:09:02.698 13:06:54 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:02.698 13:06:54 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:02.698 13:06:54 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:02.698 13:06:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.698 13:06:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:02.698 ************************************
00:09:02.698 START TEST nvme_hello_world
00:09:02.698 ************************************
00:09:02.698 13:06:54 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:02.957 Initializing NVMe Controllers
00:09:02.957 Attached to 0000:00:10.0
00:09:02.957 Namespace ID: 1 size: 6GB
00:09:02.957 Attached to 0000:00:11.0
00:09:02.957 Namespace ID: 1 size: 5GB
00:09:02.957 Attached to 0000:00:13.0
00:09:02.957 Namespace ID: 1 size: 1GB
00:09:02.957 Attached to 0000:00:12.0
00:09:02.957 Namespace ID: 1 size: 4GB
00:09:02.957 Namespace ID: 2 size: 4GB
00:09:02.957 Namespace ID: 3 size: 4GB
00:09:02.957 Initialization complete.
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
00:09:02.957 INFO: using host memory buffer for IO
00:09:02.957 Hello world!
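The run_test lines, the real/user/sys block, and the START/END banners that frame every test here come from a wrapper in autotest_common.sh. The sketch below is a simplified reconstruction of that pattern for orientation only; the real wrapper also manages xtrace state and exit-code bookkeeping, and its exact body is not shown in this log.

    #!/usr/bin/env bash
    # Simplified sketch of the run_test banner-and-timing pattern (illustrative, not SPDK's code).
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # produces the real/user/sys block seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0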
00:09:02.957 
00:09:02.957 real 0m0.325s
00:09:02.957 user 0m0.119s
00:09:02.957 sys 0m0.157s
00:09:02.957 13:06:54 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.957 ************************************
00:09:02.957 END TEST nvme_hello_world
00:09:02.957 ************************************
00:09:02.957 13:06:54 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:02.957 13:06:54 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:02.957 13:06:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:02.957 13:06:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.957 13:06:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:02.957 ************************************
00:09:02.957 START TEST nvme_sgl
00:09:02.957 ************************************
00:09:02.957 13:06:54 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:03.220 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:03.220 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:03.220 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:03.479 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:03.479 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:03.479 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:03.479 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:03.479 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:03.479 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:03.479 NVMe Readv/Writev Request test
00:09:03.479 Attached to 0000:00:10.0
00:09:03.479 Attached to 0000:00:11.0
00:09:03.479 Attached to 0000:00:13.0
00:09:03.479 Attached to 0000:00:12.0
00:09:03.479 0000:00:10.0: build_io_request_2 test passed
00:09:03.479 0000:00:10.0: build_io_request_4 test passed
00:09:03.479 0000:00:10.0: build_io_request_5 test passed
00:09:03.479 0000:00:10.0: build_io_request_6 test passed
00:09:03.479 0000:00:10.0: build_io_request_7 test passed
00:09:03.479 0000:00:10.0: build_io_request_10 test passed
00:09:03.479 0000:00:11.0: build_io_request_2 test passed
00:09:03.479 0000:00:11.0: build_io_request_4 test passed
00:09:03.479 0000:00:11.0: build_io_request_5 test passed
00:09:03.479 0000:00:11.0: build_io_request_6 test passed
00:09:03.479 0000:00:11.0: build_io_request_7 test passed
00:09:03.479 0000:00:11.0: build_io_request_10 test passed
00:09:03.479 Cleaning up...
00:09:03.479 
00:09:03.479 real 0m0.372s
00:09:03.479 user 0m0.163s
00:09:03.479 sys 0m0.168s
00:09:03.479 13:06:54 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:03.479 13:06:54 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:03.479 ************************************
00:09:03.479 END TEST nvme_sgl
00:09:03.479 ************************************
00:09:03.479 13:06:54 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:03.479 13:06:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:03.479 13:06:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:03.479 13:06:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:03.479 ************************************
00:09:03.479 START TEST nvme_e2edp
00:09:03.479 ************************************
00:09:03.479 13:06:54 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:03.746 NVMe Write/Read with End-to-End data protection test
00:09:03.746 Attached to 0000:00:10.0
00:09:03.746 Attached to 0000:00:11.0
00:09:03.746 Attached to 0000:00:13.0
00:09:03.746 Attached to 0000:00:12.0
00:09:03.746 Cleaning up...
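The e2edp pass above attaches all four controllers and immediately cleans up; no write/read cycle is logged, presumably because none of the QEMU namespaces in this rig is formatted with protection information for the test to exercise. On a machine where a namespace is still managed by the kernel driver, that can be checked with stock nvme-cli. The command below is an illustrative aside, not part of this harness (here the devices are bound to SPDK's userspace driver, so no /dev/nvme* nodes exist):

    # Hypothetical check on a kernel-managed namespace (requires nvme-cli):
    # the "dps" field of Identify Namespace is non-zero when protection info is enabled.
    nvme id-ns /dev/nvme0n1 --human-readable | grep -i dps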
00:09:03.746 
00:09:03.746 real 0m0.301s
00:09:03.746 user 0m0.114s
00:09:03.746 sys 0m0.144s
00:09:03.746 13:06:55 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:03.746 13:06:55 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:03.746 ************************************
00:09:03.746 END TEST nvme_e2edp
00:09:03.746 ************************************
00:09:03.746 13:06:55 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:03.746 13:06:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:03.746 13:06:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:03.746 13:06:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:04.003 ************************************
00:09:04.003 START TEST nvme_reserve
00:09:04.003 ************************************
00:09:04.003 13:06:55 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:04.262 =====================================================
00:09:04.262 NVMe Controller at PCI bus 0, device 16, function 0
00:09:04.262 =====================================================
00:09:04.262 Reservations: Not Supported
00:09:04.262 =====================================================
00:09:04.262 NVMe Controller at PCI bus 0, device 17, function 0
00:09:04.262 =====================================================
00:09:04.262 Reservations: Not Supported
00:09:04.262 =====================================================
00:09:04.262 NVMe Controller at PCI bus 0, device 19, function 0
00:09:04.262 =====================================================
00:09:04.262 Reservations: Not Supported
00:09:04.262 =====================================================
00:09:04.262 NVMe Controller at PCI bus 0, device 18, function 0
00:09:04.262 =====================================================
00:09:04.262 Reservations: Not Supported
00:09:04.262 Reservation test passed
00:09:04.262 
00:09:04.262 real 0m0.295s
00:09:04.262 user 0m0.111s
00:09:04.262 sys 0m0.144s
00:09:04.262 13:06:55 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:04.262 13:06:55 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:09:04.262 ************************************
00:09:04.262 END TEST nvme_reserve
00:09:04.262 ************************************
00:09:04.262 13:06:55 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:04.262 13:06:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:04.262 13:06:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:04.262 13:06:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:04.262 ************************************
00:09:04.262 START TEST nvme_err_injection
00:09:04.262 ************************************
00:09:04.262 13:06:55 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:04.521 NVMe Error Injection test
00:09:04.521 Attached to 0000:00:10.0
00:09:04.521 Attached to 0000:00:11.0
00:09:04.521 Attached to 0000:00:13.0
00:09:04.521 Attached to 0000:00:12.0
00:09:04.521 0000:00:11.0: get features failed as expected
00:09:04.521 0000:00:13.0: get features failed as expected
00:09:04.521 0000:00:12.0: get features failed as expected
00:09:04.521 0000:00:10.0: get features failed as expected
00:09:04.521 0000:00:11.0: get features successfully as expected
00:09:04.521 0000:00:13.0: get features successfully as expected
00:09:04.521 0000:00:12.0: get features successfully as expected
00:09:04.521 0000:00:10.0: get features successfully as expected
00:09:04.521 0000:00:10.0: read failed as expected
00:09:04.521 0000:00:11.0: read failed as expected
00:09:04.521 0000:00:13.0: read failed as expected
00:09:04.521 0000:00:12.0: read failed as expected
00:09:04.521 0000:00:10.0: read successfully as expected
00:09:04.521 0000:00:11.0: read successfully as expected
00:09:04.521 0000:00:13.0: read successfully as expected
00:09:04.521 0000:00:12.0: read successfully as expected
00:09:04.521 Cleaning up...
00:09:04.521 
00:09:04.521 real 0m0.310s
00:09:04.521 user 0m0.101s
00:09:04.521 sys 0m0.163s
00:09:04.521 13:06:56 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:04.521 13:06:56 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:04.521 ************************************
00:09:04.521 END TEST nvme_err_injection
00:09:04.521 ************************************
00:09:04.521 13:06:56 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:04.521 13:06:56 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:09:04.521 13:06:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:04.521 13:06:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:04.521 ************************************
00:09:04.521 START TEST nvme_overhead
00:09:04.521 ************************************
00:09:04.521 13:06:56 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:05.899 Initializing NVMe Controllers
00:09:05.899 Attached to 0000:00:10.0
00:09:05.899 Attached to 0000:00:11.0
00:09:05.899 Attached to 0000:00:13.0
00:09:05.899 Attached to 0000:00:12.0
00:09:05.899 Initialization complete. Launching workers.
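nvme_overhead (run here with -o 4096, presumably the IO size in bytes, for -t 1 second) times the software side of each IO: the avg/min/max lines that follow report, in nanoseconds, how long submission and completion processing took. Adding the two averages gives a rough per-IO software cost. A back-of-the-envelope check using the values reported just below (the values are copied from this log; the calculation itself is only illustrative):

    awk 'BEGIN {
        submit_ns   = 14097.8    # avg submit overhead from this log
        complete_ns = 8158.6     # avg complete overhead from this log
        printf "approx. per-IO software overhead: %.1f us\n", (submit_ns + complete_ns) / 1000
    }'
    # prints: approx. per-IO software overhead: 22.3 us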
00:09:05.899 submit (in ns) avg, min, max = 14097.8, 11520.5, 83400.8
00:09:05.899 complete (in ns) avg, min, max = 8158.6, 7810.4, 55251.4
00:09:05.899 
00:09:05.899 Submit histogram
00:09:05.899 ================
00:09:05.899 Range in us Cumulative Count
00:09:05.899 [histogram buckets omitted: cumulative count rises from 0.0167% in the 11.515 - 11.566 us bucket to 100.0000% in the 83.071 - 83.483 us bucket]
00:09:05.900 
00:09:05.900 Complete histogram
00:09:05.900 ==================
00:09:05.900 Range in us Cumulative Count
00:09:05.900 [histogram buckets omitted: cumulative count rises from 0.0167% in the 7.762 - 7.814 us bucket to 100.0000% in the 55.107 - 55.518 us bucket]
00:09:05.900 
00:09:05.900 real 0m1.312s
00:09:05.900 user 0m1.112s
00:09:05.900 sys 0m0.154s
00:09:05.900 13:06:57 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.900 13:06:57 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:09:05.900 ************************************
00:09:05.900 END TEST nvme_overhead
00:09:05.900 ************************************
00:09:05.900 13:06:57 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:05.900 13:06:57 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:05.900 13:06:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:05.900 13:06:57 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:05.900 ************************************
00:09:05.900 START TEST nvme_arbitration
00:09:05.900 ************************************
00:09:05.900 13:06:57 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:10.092 Initializing NVMe Controllers
00:09:10.092 Attached to 0000:00:10.0
00:09:10.092 Attached to 0000:00:11.0
00:09:10.092 Attached to 0000:00:13.0
00:09:10.092 Attached to 0000:00:12.0
00:09:10.092 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:10.092 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:10.092 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:10.092 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:10.092 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:10.092 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:10.092 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:10.092 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:10.092 Initialization complete. Launching workers.
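In the arbitration results that follow, the two columns per controller are redundant views of the same measurement: "secs/100000 ios" is simply 100000 divided by the reported IO/s. A quick consistency check with one value taken from this log (illustrative only):

    awk 'BEGIN { printf "secs/100000 ios = %.2f\n", 100000 / 597.33 }'
    # prints: secs/100000 ios = 167.41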
00:09:10.092 Starting thread on core 1 with urgent priority queue
00:09:10.092 Starting thread on core 2 with urgent priority queue
00:09:10.092 Starting thread on core 3 with urgent priority queue
00:09:10.092 Starting thread on core 0 with urgent priority queue
00:09:10.092 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
00:09:10.092 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
00:09:10.092 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:09:10.092 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:09:10.092 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios
00:09:10.092 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios
00:09:10.092 ========================================================
00:09:10.092 
00:09:10.092 
00:09:10.092 real 0m3.452s
00:09:10.092 user 0m9.433s
00:09:10.092 sys 0m0.178s
00:09:10.092 13:07:00 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.092 13:07:00 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:09:10.092 ************************************
00:09:10.092 END TEST nvme_arbitration
00:09:10.092 ************************************
00:09:10.092 13:07:00 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:10.092 13:07:00 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:10.092 13:07:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:10.092 13:07:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:10.092 ************************************
00:09:10.092 START TEST nvme_single_aen
00:09:10.092 ************************************
00:09:10.092 13:07:00 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:10.092 Asynchronous Event Request test
00:09:10.092 Attached to 0000:00:10.0
00:09:10.092 Attached to 0000:00:11.0
00:09:10.092 Attached to 0000:00:13.0
00:09:10.092 Attached to 0000:00:12.0
00:09:10.092 Reset controller to setup AER completions for this process
00:09:10.092 Registering asynchronous event callbacks...
00:09:10.092 Getting orig temperature thresholds of all controllers
00:09:10.092 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:10.092 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:10.092 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:10.092 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:10.092 Setting all controllers temperature threshold low to trigger AER
00:09:10.092 Waiting for all controllers temperature threshold to be set lower
00:09:10.092 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:10.092 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:09:10.092 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:10.092 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:09:10.092 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:10.092 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:09:10.092 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:10.092 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:09:10.092 Waiting for all controllers to trigger AER and reset threshold
00:09:10.092 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:10.092 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:10.092 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:10.092 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:10.092 Cleaning up...
00:09:10.092 
00:09:10.092 real 0m0.312s
00:09:10.092 user 0m0.109s
00:09:10.092 sys 0m0.153s
00:09:10.092 13:07:01 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.092 ************************************
00:09:10.092 END TEST nvme_single_aen
13:07:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:09:10.092 ************************************
00:09:10.092 13:07:01 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:09:10.092 13:07:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:10.092 13:07:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:10.092 13:07:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:10.092 ************************************
00:09:10.092 START TEST nvme_doorbell_aers
00:09:10.092 ************************************
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:10.092 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
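The xtrace above shows the whole shape of nvme_doorbell_aers: get_nvme_bdfs builds the bdfs array by asking gen_nvme.sh for a JSON config and pulling each controller's PCI address out with jq, and the loop that follows runs the doorbell_aers binary against each address under a 10-second timeout. A condensed, standalone sketch of that flow, with paths and the helper name as they appear in the trace (the standalone packaging and the exit-on-empty fallback are assumptions, not the literal SPDK functions):

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk

    get_nvme_bdfs() {
        # gen_nvme.sh emits a JSON config; jq extracts each controller's traddr.
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    bdfs=($(get_nvme_bdfs))
    (( ${#bdfs[@]} == 0 )) && exit 1   # nothing to test

    for bdf in "${bdfs[@]}"; do
        # Cap each per-controller run at 10 seconds, as in the log below.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done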
00:09:10.093 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:09:10.093 13:07:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:10.093 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:10.093 13:07:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:09:10.352 [2024-12-11 13:07:01.801741] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:20.331 Executing: test_write_invalid_db
00:09:20.331 Waiting for AER completion...
00:09:20.331 Failure: test_write_invalid_db
00:09:20.331 
00:09:20.331 Executing: test_invalid_db_write_overflow_sq
00:09:20.331 Waiting for AER completion...
00:09:20.331 Failure: test_invalid_db_write_overflow_sq
00:09:20.331 
00:09:20.331 Executing: test_invalid_db_write_overflow_cq
00:09:20.331 Waiting for AER completion...
00:09:20.331 Failure: test_invalid_db_write_overflow_cq
00:09:20.331 
00:09:20.331 13:07:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:20.331 13:07:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:09:20.331 [2024-12-11 13:07:11.854862] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:30.310 Executing: test_write_invalid_db
00:09:30.310 Waiting for AER completion...
00:09:30.310 Failure: test_write_invalid_db
00:09:30.310 
00:09:30.310 Executing: test_invalid_db_write_overflow_sq
00:09:30.310 Waiting for AER completion...
00:09:30.310 Failure: test_invalid_db_write_overflow_sq
00:09:30.310 
00:09:30.310 Executing: test_invalid_db_write_overflow_cq
00:09:30.310 Waiting for AER completion...
00:09:30.310 Failure: test_invalid_db_write_overflow_cq
00:09:30.310 
00:09:30.310 13:07:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:30.310 13:07:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:09:30.569 [2024-12-11 13:07:21.925653] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:40.549 Executing: test_write_invalid_db
00:09:40.549 Waiting for AER completion...
00:09:40.549 Failure: test_write_invalid_db
00:09:40.549 
00:09:40.549 Executing: test_invalid_db_write_overflow_sq
00:09:40.549 Waiting for AER completion...
00:09:40.549 Failure: test_invalid_db_write_overflow_sq
00:09:40.549 
00:09:40.549 Executing: test_invalid_db_write_overflow_cq
00:09:40.549 Waiting for AER completion...
00:09:40.549 Failure: test_invalid_db_write_overflow_cq
00:09:40.549 
00:09:40.549 13:07:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:40.549 13:07:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:09:40.549 [2024-12-11 13:07:31.971857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 Executing: test_write_invalid_db
00:09:50.559 Waiting for AER completion...
00:09:50.559 Failure: test_write_invalid_db
00:09:50.559 
00:09:50.559 Executing: test_invalid_db_write_overflow_sq
00:09:50.559 Waiting for AER completion...
00:09:50.559 Failure: test_invalid_db_write_overflow_sq
00:09:50.559 
00:09:50.559 Executing: test_invalid_db_write_overflow_cq
00:09:50.559 Waiting for AER completion...
00:09:50.559 Failure: test_invalid_db_write_overflow_cq
00:09:50.559 
00:09:50.559 
00:09:50.559 real 0m40.349s
00:09:50.559 user 0m28.363s
00:09:50.559 sys 0m11.625s
00:09:50.559 13:07:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:50.559 13:07:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:09:50.559 ************************************
00:09:50.559 END TEST nvme_doorbell_aers
00:09:50.559 ************************************
00:09:50.559 13:07:41 nvme -- nvme/nvme.sh@97 -- # uname
00:09:50.559 13:07:41 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:09:50.559 13:07:41 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:50.559 13:07:41 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:50.559 13:07:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:50.559 13:07:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:50.559 ************************************
00:09:50.559 START TEST nvme_multi_aen
00:09:50.559 ************************************
00:09:50.559 13:07:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:50.559 [2024-12-11 13:07:42.059040] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.059154] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.059172] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.060880] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.061084] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.061104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
00:09:50.559 [2024-12-11 13:07:42.062371] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request.
Dropping the request. 00:09:50.559 [2024-12-11 13:07:42.062403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request. 00:09:50.559 [2024-12-11 13:07:42.062417] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request. 00:09:50.559 [2024-12-11 13:07:42.064063] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request. 00:09:50.559 [2024-12-11 13:07:42.064237] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request. 00:09:50.559 [2024-12-11 13:07:42.064257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65844) is not found. Dropping the request. 00:09:50.559 Child process pid: 66365 00:09:50.818 [Child] Asynchronous Event Request test 00:09:50.818 [Child] Attached to 0000:00:10.0 00:09:50.818 [Child] Attached to 0000:00:11.0 00:09:50.818 [Child] Attached to 0000:00:13.0 00:09:50.818 [Child] Attached to 0000:00:12.0 00:09:50.818 [Child] Registering asynchronous event callbacks... 00:09:50.818 [Child] Getting orig temperature thresholds of all controllers 00:09:50.818 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.818 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.818 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.818 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.818 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:50.818 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.818 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.818 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.818 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.818 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.818 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.818 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.818 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.818 [Child] Cleaning up... 00:09:51.078 Asynchronous Event Request test 00:09:51.078 Attached to 0000:00:10.0 00:09:51.078 Attached to 0000:00:11.0 00:09:51.078 Attached to 0000:00:13.0 00:09:51.078 Attached to 0000:00:12.0 00:09:51.078 Reset controller to setup AER completions for this process 00:09:51.078 Registering asynchronous event callbacks... 
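The [Child] block above comes from the secondary process that aer -m -T -i 0 forks: it attaches to the same four controllers and only observes the temperature AERs. The parent pass, which resumes below, is the one that actually lowers each controller's temperature threshold (NVMe feature 0x04) beneath the reported 323 K composite temperature so the "temperature above threshold" AER fires. On a device bound to the kernel driver the same trigger can be reproduced by hand with nvme-cli; a hedged sketch (assumes /dev/nvme0 exists; the CI devices here are vfio-bound to SPDK, so this illustrates the mechanism rather than replaying the test):

    # FID 0x04 is the NVMe Temperature Threshold feature.
    nvme get-feature /dev/nvme0 -f 0x04           # read the current threshold (343 K here)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x140  # drop it to 320 K, below the 323 K
                                                  # composite temperature, so the
                                                  # temperature AER is raised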
00:09:51.078 Getting orig temperature thresholds of all controllers 00:09:51.078 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.078 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.078 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.078 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.078 Setting all controllers temperature threshold low to trigger AER 00:09:51.078 Waiting for all controllers temperature threshold to be set lower 00:09:51.078 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.078 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:51.078 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.078 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:51.078 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.078 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:51.078 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.078 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:51.078 Waiting for all controllers to trigger AER and reset threshold 00:09:51.078 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.078 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.078 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.078 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.078 Cleaning up... 00:09:51.078 00:09:51.078 real 0m0.640s 00:09:51.078 user 0m0.203s 00:09:51.078 sys 0m0.328s 00:09:51.079 13:07:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.079 13:07:42 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:51.079 ************************************ 00:09:51.079 END TEST nvme_multi_aen 00:09:51.079 ************************************ 00:09:51.079 13:07:42 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:51.079 13:07:42 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.079 13:07:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.079 13:07:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.079 ************************************ 00:09:51.079 START TEST nvme_startup 00:09:51.079 ************************************ 00:09:51.079 13:07:42 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:51.337 Initializing NVMe Controllers 00:09:51.337 Attached to 0000:00:10.0 00:09:51.337 Attached to 0000:00:11.0 00:09:51.337 Attached to 0000:00:13.0 00:09:51.337 Attached to 0000:00:12.0 00:09:51.337 Initialization complete. 00:09:51.337 Time used:182881.891 (us). 
00:09:51.337 ************************************ 00:09:51.337 END TEST nvme_startup 00:09:51.337 ************************************ 00:09:51.337 00:09:51.337 real 0m0.289s 00:09:51.337 user 0m0.101s 00:09:51.337 sys 0m0.144s 00:09:51.337 13:07:42 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.337 13:07:42 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:51.337 13:07:42 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:51.337 13:07:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.337 13:07:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.337 13:07:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.337 ************************************ 00:09:51.337 START TEST nvme_multi_secondary 00:09:51.337 ************************************ 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66421 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66422 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:51.337 13:07:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:54.662 Initializing NVMe Controllers 00:09:54.662 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.662 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.662 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.662 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.662 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:54.662 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:54.662 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:54.662 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:54.662 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:54.662 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:54.662 Initialization complete. Launching workers. 
00:09:54.662 ======================================================== 00:09:54.662 Latency(us) 00:09:54.662 Device Information : IOPS MiB/s Average min max 00:09:54.662 PCIE (0000:00:10.0) NSID 1 from core 1: 5442.81 21.26 2937.44 1003.21 6656.91 00:09:54.662 PCIE (0000:00:11.0) NSID 1 from core 1: 5442.81 21.26 2939.16 1036.61 5799.84 00:09:54.662 PCIE (0000:00:13.0) NSID 1 from core 1: 5442.81 21.26 2939.24 1046.40 6075.46 00:09:54.662 PCIE (0000:00:12.0) NSID 1 from core 1: 5442.81 21.26 2939.44 1047.33 6003.98 00:09:54.662 PCIE (0000:00:12.0) NSID 2 from core 1: 5442.81 21.26 2939.63 1024.42 6671.18 00:09:54.662 PCIE (0000:00:12.0) NSID 3 from core 1: 5448.14 21.28 2936.93 1046.52 6475.74 00:09:54.662 ======================================================== 00:09:54.662 Total : 32662.17 127.59 2938.64 1003.21 6671.18 00:09:54.662 00:09:54.928 Initializing NVMe Controllers 00:09:54.928 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.928 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.928 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.928 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.928 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:54.928 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:54.928 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:54.928 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:54.928 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:54.928 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:54.928 Initialization complete. Launching workers. 00:09:54.928 ======================================================== 00:09:54.928 Latency(us) 00:09:54.928 Device Information : IOPS MiB/s Average min max 00:09:54.928 PCIE (0000:00:10.0) NSID 1 from core 2: 3290.36 12.85 4860.93 1321.96 11523.09 00:09:54.928 PCIE (0000:00:11.0) NSID 1 from core 2: 3290.36 12.85 4862.34 1220.09 11070.58 00:09:54.928 PCIE (0000:00:13.0) NSID 1 from core 2: 3290.36 12.85 4862.18 1211.83 11076.97 00:09:54.928 PCIE (0000:00:12.0) NSID 1 from core 2: 3290.36 12.85 4862.08 1356.45 11092.12 00:09:54.928 PCIE (0000:00:12.0) NSID 2 from core 2: 3290.36 12.85 4862.00 1225.52 10709.18 00:09:54.928 PCIE (0000:00:12.0) NSID 3 from core 2: 3290.36 12.85 4861.85 1196.90 10916.61 00:09:54.928 ======================================================== 00:09:54.928 Total : 19742.17 77.12 4861.90 1196.90 11523.09 00:09:54.928 00:09:54.928 13:07:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66421 00:09:57.460 Initializing NVMe Controllers 00:09:57.460 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.460 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:57.460 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:57.460 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:57.460 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:57.460 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:57.460 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:57.460 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:57.460 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:57.460 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:57.460 Initialization complete. Launching workers. 
00:09:57.460 ======================================================== 00:09:57.460 Latency(us) 00:09:57.460 Device Information : IOPS MiB/s Average min max 00:09:57.460 PCIE (0000:00:10.0) NSID 1 from core 0: 8963.13 35.01 1783.59 898.47 5701.57 00:09:57.460 PCIE (0000:00:11.0) NSID 1 from core 0: 8963.13 35.01 1784.62 951.08 5577.84 00:09:57.460 PCIE (0000:00:13.0) NSID 1 from core 0: 8963.13 35.01 1784.58 889.60 5625.78 00:09:57.460 PCIE (0000:00:12.0) NSID 1 from core 0: 8963.13 35.01 1784.53 829.20 5634.55 00:09:57.460 PCIE (0000:00:12.0) NSID 2 from core 0: 8963.13 35.01 1784.47 788.70 5661.81 00:09:57.460 PCIE (0000:00:12.0) NSID 3 from core 0: 8963.13 35.01 1784.42 735.28 5594.98 00:09:57.460 ======================================================== 00:09:57.460 Total : 53778.76 210.07 1784.37 735.28 5701.57 00:09:57.460 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66422 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66491 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66492 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:57.460 13:07:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:00.744 Initializing NVMe Controllers 00:10:00.744 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.744 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:00.744 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:00.744 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:00.744 Initialization complete. Launching workers. 
00:10:00.744 ======================================================== 00:10:00.744 Latency(us) 00:10:00.744 Device Information : IOPS MiB/s Average min max 00:10:00.744 PCIE (0000:00:10.0) NSID 1 from core 1: 5593.70 21.85 2858.17 929.73 7270.37 00:10:00.744 PCIE (0000:00:11.0) NSID 1 from core 1: 5593.70 21.85 2859.89 961.43 7270.84 00:10:00.744 PCIE (0000:00:13.0) NSID 1 from core 1: 5593.70 21.85 2860.32 958.60 7289.81 00:10:00.744 PCIE (0000:00:12.0) NSID 1 from core 1: 5593.70 21.85 2860.58 955.15 7100.88 00:10:00.744 PCIE (0000:00:12.0) NSID 2 from core 1: 5593.70 21.85 2860.67 959.64 7566.51 00:10:00.744 PCIE (0000:00:12.0) NSID 3 from core 1: 5599.03 21.87 2858.10 961.61 7653.01 00:10:00.744 ======================================================== 00:10:00.744 Total : 33567.51 131.12 2859.62 929.73 7653.01 00:10:00.744 00:10:00.744 Initializing NVMe Controllers 00:10:00.744 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.744 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.744 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:00.744 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:00.744 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:00.744 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:00.744 Initialization complete. Launching workers. 00:10:00.744 ======================================================== 00:10:00.744 Latency(us) 00:10:00.744 Device Information : IOPS MiB/s Average min max 00:10:00.744 PCIE (0000:00:10.0) NSID 1 from core 0: 5370.15 20.98 2977.09 1031.75 7369.89 00:10:00.744 PCIE (0000:00:11.0) NSID 1 from core 0: 5370.15 20.98 2978.92 1051.71 7089.07 00:10:00.744 PCIE (0000:00:13.0) NSID 1 from core 0: 5370.15 20.98 2978.87 1047.46 7052.55 00:10:00.744 PCIE (0000:00:12.0) NSID 1 from core 0: 5370.15 20.98 2978.83 1053.17 7669.78 00:10:00.744 PCIE (0000:00:12.0) NSID 2 from core 0: 5370.15 20.98 2978.80 1059.28 7587.58 00:10:00.744 PCIE (0000:00:12.0) NSID 3 from core 0: 5370.15 20.98 2978.76 1050.67 7295.13 00:10:00.744 ======================================================== 00:10:00.744 Total : 32220.92 125.86 2978.55 1031.75 7669.78 00:10:00.744 00:10:02.648 Initializing NVMe Controllers 00:10:02.648 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:02.648 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:02.648 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:02.648 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:02.648 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:02.648 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:02.648 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:02.648 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:02.648 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:02.648 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:02.648 Initialization complete. Launching workers. 
00:10:02.648 ======================================================== 00:10:02.648 Latency(us) 00:10:02.648 Device Information : IOPS MiB/s Average min max 00:10:02.648 PCIE (0000:00:10.0) NSID 1 from core 2: 3123.19 12.20 5120.72 1143.03 11900.06 00:10:02.648 PCIE (0000:00:11.0) NSID 1 from core 2: 3123.19 12.20 5122.45 1120.14 12389.12 00:10:02.648 PCIE (0000:00:13.0) NSID 1 from core 2: 3123.19 12.20 5122.33 1159.37 12649.68 00:10:02.648 PCIE (0000:00:12.0) NSID 1 from core 2: 3123.19 12.20 5122.21 1140.90 12501.91 00:10:02.648 PCIE (0000:00:12.0) NSID 2 from core 2: 3123.19 12.20 5122.36 1143.36 12624.34 00:10:02.648 PCIE (0000:00:12.0) NSID 3 from core 2: 3123.19 12.20 5122.24 1165.19 12820.14 00:10:02.648 ======================================================== 00:10:02.648 Total : 18739.13 73.20 5122.05 1120.14 12820.14 00:10:02.648 00:10:02.908 ************************************ 00:10:02.908 END TEST nvme_multi_secondary 00:10:02.908 ************************************ 00:10:02.908 13:07:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66491 00:10:02.908 13:07:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66492 00:10:02.908 00:10:02.908 real 0m11.354s 00:10:02.908 user 0m18.578s 00:10:02.908 sys 0m1.165s 00:10:02.908 13:07:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.908 13:07:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:02.908 13:07:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:02.908 13:07:54 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:02.908 13:07:54 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/65429 ]] 00:10:02.908 13:07:54 nvme -- common/autotest_common.sh@1094 -- # kill 65429 00:10:02.908 13:07:54 nvme -- common/autotest_common.sh@1095 -- # wait 65429 00:10:02.908 [2024-12-11 13:07:54.316276] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.316638] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.316718] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.316763] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.322860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.322956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.322996] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.323039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.327886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 
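END TEST nvme_multi_secondary above closes the multi-process perf pass: three spdk_nvme_perf instances share shared-memory instance id 0 (-i 0) on disjoint core masks, the longest-running one acting as the primary that owns the shared hugepage region while the -t 3 secondaries attach to it. A sketch of the first round, with options copied from the nvme.sh@51-55 trace and the backgrounding/pid handling reconstructed from the wait calls at @56-57:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Same -i 0 everywhere so the processes share one SPDK shm instance;
    # distinct core masks keep their reactors apart.
    $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # outlives the others
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # foreground instance
    wait "$pid0"
    wait "$pid1"

The second round (pids 66491/66492, nvme.sh@60-66) repeats the pattern with the 5-second run moved to core mask 0x4, and each instance prints its own "from core N" latency table.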
00:10:02.908 [2024-12-11 13:07:54.327951] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.327979] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.328007] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.332504] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.332570] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.332597] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:02.908 [2024-12-11 13:07:54.332626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66364) is not found. Dropping the request. 00:10:03.167 13:07:54 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:03.167 13:07:54 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:03.167 13:07:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:03.167 13:07:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.167 13:07:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.167 13:07:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.167 ************************************ 00:10:03.167 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:03.167 ************************************ 00:10:03.167 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:03.167 * Looking for test storage... 
00:10:03.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:03.167 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.167 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.167 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.427 --rc genhtml_branch_coverage=1 00:10:03.427 --rc genhtml_function_coverage=1 00:10:03.427 --rc genhtml_legend=1 00:10:03.427 --rc geninfo_all_blocks=1 00:10:03.427 --rc geninfo_unexecuted_blocks=1 00:10:03.427 00:10:03.427 ' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.427 --rc genhtml_branch_coverage=1 00:10:03.427 --rc genhtml_function_coverage=1 00:10:03.427 --rc genhtml_legend=1 00:10:03.427 --rc geninfo_all_blocks=1 00:10:03.427 --rc geninfo_unexecuted_blocks=1 00:10:03.427 00:10:03.427 ' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.427 --rc genhtml_branch_coverage=1 00:10:03.427 --rc genhtml_function_coverage=1 00:10:03.427 --rc genhtml_legend=1 00:10:03.427 --rc geninfo_all_blocks=1 00:10:03.427 --rc geninfo_unexecuted_blocks=1 00:10:03.427 00:10:03.427 ' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.427 --rc genhtml_branch_coverage=1 00:10:03.427 --rc genhtml_function_coverage=1 00:10:03.427 --rc genhtml_legend=1 00:10:03.427 --rc geninfo_all_blocks=1 00:10:03.427 --rc geninfo_unexecuted_blocks=1 00:10:03.427 00:10:03.427 ' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:03.427 
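The xtrace through scripts/common.sh just above (lt 1.15 2 via cmp_versions) is the lcov version gate: both version strings are split on dots, dashes and colons and compared numerically component by component, which a plain string comparison would get wrong. A condensed sketch of that comparison, reconstructed from the trace (the real helper also validates each component through decimal() and supports more operators):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists; missing components count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]   # every component matched
    }
    lt 1.15 2 && echo 'lcov older than 2'   # matches the trace: 1 < 2, so lt succeeds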
13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:03.427 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66664 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66664 00:10:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66664 ']' 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
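The bring-up traced at nvme_reset_stuck_adm_cmd.sh@35-38 is the standard harness pattern: start spdk_tgt in the background, trap cleanup, and block until the RPC socket answers. Once up, the test attaches the first controller and arms a one-shot error injection on admin opcode 10 (Get Features, 0x0a) that holds the command for up to 15 s without submitting it. A sketch with the RPC names and flags matching the trace that follows (killprocess and waitforlisten are the autotest_common.sh helpers seen there, and rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/build/bin/spdk_tgt" -m 0xF &
    spdk_target_pid=$!
    trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_target_pid"   # polls /var/tmp/spdk.sock until the target answers
    rpc=$rootdir/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

With the injection armed, the bdev_nvme_reset_controller call further down must succeed while that Get Features is still stuck; the command's completion is then pulled out of the err_inj tmp file with jq -r .cpl and decoded via base64 -d | hexdump to confirm the injected SCT/SC of 0/1 came back.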
00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.428 13:07:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:03.687 [2024-12-11 13:07:55.049905] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:10:03.687 [2024-12-11 13:07:55.050041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66664 ] 00:10:03.946 [2024-12-11 13:07:55.255686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.946 [2024-12-11 13:07:55.393748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.946 [2024-12-11 13:07:55.393919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.946 [2024-12-11 13:07:55.394087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.946 [2024-12-11 13:07:55.394101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:05.326 nvme0n1 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_PmLVU.txt 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:05.326 true 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733922476 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66687 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:05.326 13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:05.326 
13:07:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 [2024-12-11 13:07:58.572599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:07.257 [2024-12-11 13:07:58.572933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:07.257 [2024-12-11 13:07:58.572965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:07.257 [2024-12-11 13:07:58.572985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.257 [2024-12-11 13:07:58.575174] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:07.257 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66687 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66687 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66687 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_PmLVU.txt 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_PmLVU.txt 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66664 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66664 ']' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66664 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66664 00:10:07.257 killing process with pid 66664 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66664' 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66664 00:10:07.257 13:07:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66664 00:10:10.556 13:08:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:10.556 13:08:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:10.556 ************************************ 00:10:10.556 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:10.556 ************************************ 00:10:10.556 00:10:10.556 real 0m6.923s 
00:10:10.556 user 0m23.855s 00:10:10.556 sys 0m1.079s 00:10:10.556 13:08:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.556 13:08:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:10.556 13:08:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:10.556 13:08:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:10.556 13:08:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.556 13:08:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.556 13:08:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.556 ************************************ 00:10:10.556 START TEST nvme_fio 00:10:10.556 ************************************ 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:10.556 13:08:01 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.556 13:08:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.815 13:08:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.815 13:08:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:10.815 13:08:02 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.815 13:08:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:11.074 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:11.074 fio-3.35 00:10:11.074 Starting 1 thread 00:10:14.359 00:10:14.359 test: (groupid=0, jobs=1): err= 0: pid=66843: Wed Dec 11 13:08:05 2024 00:10:14.359 read: IOPS=21.2k, BW=82.7MiB/s (86.7MB/s)(165MiB/2001msec) 00:10:14.359 slat (usec): min=4, max=107, avg= 5.09, stdev= 1.46 00:10:14.359 clat (usec): min=202, max=10645, avg=3009.30, stdev=373.40 00:10:14.359 lat (usec): min=207, max=10752, avg=3014.39, stdev=373.95 00:10:14.359 clat percentiles (usec): 00:10:14.359 | 1.00th=[ 2737], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:14.359 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:10:14.359 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3097], 95.00th=[ 3195], 00:10:14.359 | 99.00th=[ 4080], 99.50th=[ 5276], 99.90th=[ 8586], 99.95th=[ 9372], 00:10:14.359 | 99.99th=[10421] 00:10:14.359 bw ( KiB/s): min=82528, max=86208, per=100.00%, avg=84840.00, stdev=2013.44, samples=3 00:10:14.359 iops : min=20632, max=21552, avg=21210.00, stdev=503.36, samples=3 00:10:14.359 write: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(164MiB/2001msec); 0 zone resets 00:10:14.359 slat (nsec): min=4323, max=65929, avg=5301.83, stdev=1386.77 00:10:14.359 clat (usec): min=280, max=14773, avg=3027.22, stdev=471.66 00:10:14.359 lat (usec): min=285, max=14778, avg=3032.52, stdev=472.04 00:10:14.359 clat percentiles (usec): 00:10:14.359 | 1.00th=[ 2769], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:14.359 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:10:14.359 | 70.00th=[ 3032], 80.00th=[ 3032], 90.00th=[ 3097], 95.00th=[ 3195], 00:10:14.359 | 99.00th=[ 4293], 99.50th=[ 5800], 99.90th=[10290], 99.95th=[11338], 00:10:14.359 | 99.99th=[11731] 00:10:14.359 bw ( KiB/s): min=82400, max=86248, per=100.00%, avg=84960.00, stdev=2217.04, samples=3 00:10:14.359 iops : min=20600, max=21562, avg=21240.00, stdev=554.26, samples=3 00:10:14.359 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:14.359 lat (msec) : 2=0.12%, 4=98.64%, 10=1.13%, 20=0.07% 00:10:14.359 cpu : usr=99.40%, sys=0.10%, ctx=13, 
majf=0, minf=608 00:10:14.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.359 issued rwts: total=42364,42088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.359 00:10:14.359 Run status group 0 (all jobs): 00:10:14.359 READ: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=165MiB (174MB), run=2001-2001msec 00:10:14.359 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=164MiB (172MB), run=2001-2001msec 00:10:14.619 ----------------------------------------------------- 00:10:14.619 Suppressions used: 00:10:14.619 count bytes template 00:10:14.619 1 32 /usr/src/fio/parse.c 00:10:14.619 1 8 libtcmalloc_minimal.so 00:10:14.619 ----------------------------------------------------- 00:10:14.619 00:10:14.619 13:08:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.619 13:08:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:14.619 13:08:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:14.619 13:08:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:14.879 13:08:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:14.879 13:08:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:15.138 13:08:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:15.138 13:08:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:15.138 13:08:06 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:15.138 13:08:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:15.397 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:15.397 fio-3.35 00:10:15.397 Starting 1 thread 00:10:19.589 00:10:19.589 test: (groupid=0, jobs=1): err= 0: pid=66909: Wed Dec 11 13:08:10 2024 00:10:19.589 read: IOPS=20.8k, BW=81.4MiB/s (85.3MB/s)(163MiB/2001msec) 00:10:19.589 slat (nsec): min=4270, max=76262, avg=5293.21, stdev=2311.90 00:10:19.589 clat (usec): min=290, max=11222, avg=3057.92, stdev=416.93 00:10:19.589 lat (usec): min=294, max=11299, avg=3063.22, stdev=417.49 00:10:19.589 clat percentiles (usec): 00:10:19.589 | 1.00th=[ 2737], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:19.589 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:19.589 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3228], 00:10:19.589 | 99.00th=[ 4490], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[ 8848], 00:10:19.589 | 99.99th=[10945] 00:10:19.589 bw ( KiB/s): min=81176, max=84272, per=99.07%, avg=82570.67, stdev=1570.62, samples=3 00:10:19.589 iops : min=20294, max=21068, avg=20642.67, stdev=392.65, samples=3 00:10:19.589 write: IOPS=20.8k, BW=81.1MiB/s (85.0MB/s)(162MiB/2001msec); 0 zone resets 00:10:19.589 slat (nsec): min=4387, max=50040, avg=5485.73, stdev=2344.66 00:10:19.589 clat (usec): min=192, max=11081, avg=3068.52, stdev=420.74 00:10:19.589 lat (usec): min=198, max=11104, avg=3074.01, stdev=421.27 00:10:19.589 clat percentiles (usec): 00:10:19.589 | 1.00th=[ 2737], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:19.589 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:19.589 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3261], 00:10:19.589 | 99.00th=[ 4424], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[ 8979], 00:10:19.589 | 99.99th=[10552] 00:10:19.589 bw ( KiB/s): min=81576, max=84312, per=99.53%, avg=82632.00, stdev=1470.87, samples=3 00:10:19.589 iops : min=20394, max=21080, avg=20658.67, stdev=368.86, samples=3 00:10:19.589 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:19.589 lat (msec) : 2=0.05%, 4=98.71%, 10=1.18%, 20=0.03% 00:10:19.589 cpu : usr=99.35%, sys=0.00%, ctx=4, majf=0, minf=609 00:10:19.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:19.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.589 issued rwts: total=41694,41531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.589 00:10:19.589 Run status group 0 (all jobs): 00:10:19.589 READ: bw=81.4MiB/s (85.3MB/s), 81.4MiB/s-81.4MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:10:19.589 WRITE: bw=81.1MiB/s (85.0MB/s), 81.1MiB/s-81.1MiB/s (85.0MB/s-85.0MB/s), io=162MiB (170MB), run=2001-2001msec 00:10:19.589 ----------------------------------------------------- 00:10:19.589 Suppressions used: 00:10:19.589 count bytes template 00:10:19.589 1 32 /usr/src/fio/parse.c 00:10:19.589 1 8 libtcmalloc_minimal.so 00:10:19.589 ----------------------------------------------------- 00:10:19.589 
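Each fio pass in this test goes through the fio_plugin wrapper traced above: it asks ldd where the SPDK fio plugin's address-sanitizer runtime lives (grep libasan | awk '{print $3}') and preloads that runtime ahead of the plugin, since an ASan-instrumented shared object must have its runtime loaded before fio dlopen()s it. A reconstructed sketch of the wrapper (paths assumed to match the trace; error handling trimmed):

    fio_nvme_sketch() {
        local rootdir=/home/vagrant/spdk_repo/spdk
        local plugin=$rootdir/build/fio/spdk_nvme
        # Find the libasan the plugin was linked against...
        local asan_lib
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # ...and load it first, then the plugin itself, for fio's dlopen().
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }
    # e.g. the 0000:00:10.0 pass above:
    # fio_nvme_sketch /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    #     '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The --bs=4096 argument itself comes from the identify probe at nvme/nvme.sh@35-41: spdk_nvme_identify output is grepped for 'Extended Data LBA', and the plain 4096-byte block size is used when the namespace carries no extended metadata.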
00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:19.589 13:08:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:19.589 13:08:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:19.589 13:08:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:19.589 13:08:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:19.849 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:19.849 fio-3.35 00:10:19.849 Starting 1 thread 00:10:24.040 00:10:24.040 test: (groupid=0, jobs=1): err= 0: pid=66975: Wed Dec 11 13:08:15 2024 00:10:24.040 read: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(165MiB/2001msec) 00:10:24.040 slat (nsec): min=4224, max=67127, avg=5284.66, stdev=2310.05 00:10:24.040 clat (usec): min=214, max=10468, avg=3030.68, stdev=272.32 00:10:24.040 lat (usec): min=219, max=10535, avg=3035.96, stdev=272.67 00:10:24.040 clat percentiles (usec): 00:10:24.040 | 1.00th=[ 2606], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:24.040 | 
30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:24.040 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3228], 00:10:24.040 | 99.00th=[ 3589], 99.50th=[ 4228], 99.90th=[ 5800], 99.95th=[ 7898], 00:10:24.040 | 99.99th=[10028] 00:10:24.040 bw ( KiB/s): min=82328, max=84696, per=99.37%, avg=83650.67, stdev=1208.11, samples=3 00:10:24.040 iops : min=20582, max=21174, avg=20912.67, stdev=302.03, samples=3 00:10:24.040 write: IOPS=20.9k, BW=81.7MiB/s (85.7MB/s)(164MiB/2001msec); 0 zone resets 00:10:24.040 slat (usec): min=4, max=109, avg= 5.45, stdev= 2.38 00:10:24.040 clat (usec): min=205, max=10203, avg=3039.67, stdev=271.81 00:10:24.040 lat (usec): min=211, max=10226, avg=3045.12, stdev=272.17 00:10:24.040 clat percentiles (usec): 00:10:24.040 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:24.040 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:24.040 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3261], 00:10:24.040 | 99.00th=[ 3687], 99.50th=[ 4293], 99.90th=[ 6194], 99.95th=[ 8225], 00:10:24.040 | 99.99th=[ 9765] 00:10:24.040 bw ( KiB/s): min=82144, max=84888, per=100.00%, avg=83722.67, stdev=1417.93, samples=3 00:10:24.040 iops : min=20536, max=21222, avg=20930.67, stdev=354.48, samples=3 00:10:24.040 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:24.040 lat (msec) : 2=0.35%, 4=98.99%, 10=0.61%, 20=0.01% 00:10:24.040 cpu : usr=99.35%, sys=0.05%, ctx=3, majf=0, minf=608 00:10:24.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.040 issued rwts: total=42113,41868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.040 00:10:24.040 Run status group 0 (all jobs): 00:10:24.040 READ: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=165MiB (172MB), run=2001-2001msec 00:10:24.040 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=164MiB (171MB), run=2001-2001msec 00:10:24.040 ----------------------------------------------------- 00:10:24.040 Suppressions used: 00:10:24.040 count bytes template 00:10:24.040 1 32 /usr/src/fio/parse.c 00:10:24.040 1 8 libtcmalloc_minimal.so 00:10:24.040 ----------------------------------------------------- 00:10:24.040 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:24.040 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:24.300 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:24.300 13:08:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:24.300 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:24.559 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:24.559 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:24.559 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:24.559 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:24.559 13:08:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:24.559 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:24.559 fio-3.35 00:10:24.559 Starting 1 thread 00:10:29.863 00:10:29.863 test: (groupid=0, jobs=1): err= 0: pid=67036: Wed Dec 11 13:08:20 2024 00:10:29.863 read: IOPS=20.5k, BW=80.2MiB/s (84.1MB/s)(160MiB/2001msec) 00:10:29.863 slat (nsec): min=4226, max=76362, avg=5388.69, stdev=2440.04 00:10:29.863 clat (usec): min=211, max=11819, avg=3104.58, stdev=612.40 00:10:29.863 lat (usec): min=216, max=11896, avg=3109.97, stdev=613.16 00:10:29.863 clat percentiles (usec): 00:10:29.863 | 1.00th=[ 2671], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:29.863 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:29.863 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3392], 00:10:29.863 | 99.00th=[ 6980], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 9765], 00:10:29.863 | 99.99th=[11600] 00:10:29.863 bw ( KiB/s): min=76160, max=84152, per=98.48%, avg=80842.67, stdev=4169.24, samples=3 00:10:29.863 iops : min=19040, max=21038, avg=20210.67, stdev=1042.31, samples=3 00:10:29.863 write: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec); 0 zone resets 00:10:29.863 slat (usec): min=4, max=117, avg= 5.56, stdev= 2.52 00:10:29.863 clat (usec): min=235, max=11696, avg=3110.87, stdev=607.59 00:10:29.863 lat (usec): min=240, max=11718, avg=3116.44, stdev=608.33 00:10:29.863 clat percentiles (usec): 00:10:29.863 | 1.00th=[ 2671], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:10:29.863 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:10:29.863 | 70.00th=[ 3097], 80.00th=[ 3130], 
90.00th=[ 3195], 95.00th=[ 3392], 00:10:29.863 | 99.00th=[ 6783], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[10028], 00:10:29.863 | 99.99th=[11338] 00:10:29.863 bw ( KiB/s): min=76112, max=84432, per=98.88%, avg=80970.67, stdev=4332.44, samples=3 00:10:29.863 iops : min=19028, max=21108, avg=20242.67, stdev=1083.11, samples=3 00:10:29.863 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:29.863 lat (msec) : 2=0.23%, 4=96.84%, 10=2.85%, 20=0.05% 00:10:29.863 cpu : usr=99.30%, sys=0.00%, ctx=7, majf=0, minf=607 00:10:29.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:29.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.863 issued rwts: total=41067,40966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.863 00:10:29.863 Run status group 0 (all jobs): 00:10:29.863 READ: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=160MiB (168MB), run=2001-2001msec 00:10:29.863 WRITE: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:10:29.863 ----------------------------------------------------- 00:10:29.863 Suppressions used: 00:10:29.863 count bytes template 00:10:29.863 1 32 /usr/src/fio/parse.c 00:10:29.863 1 8 libtcmalloc_minimal.so 00:10:29.863 ----------------------------------------------------- 00:10:29.863 00:10:29.863 13:08:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:29.863 13:08:21 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:29.863 00:10:29.863 real 0m19.538s 00:10:29.863 user 0m14.601s 00:10:29.863 sys 0m5.609s 00:10:29.863 13:08:21 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.863 13:08:21 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:29.863 ************************************ 00:10:29.863 END TEST nvme_fio 00:10:29.863 ************************************ 00:10:29.863 ************************************ 00:10:29.863 END TEST nvme 00:10:29.863 ************************************ 00:10:29.863 00:10:29.863 real 1m35.988s 00:10:29.863 user 3m44.474s 00:10:29.863 sys 0m26.342s 00:10:29.863 13:08:21 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.863 13:08:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.863 13:08:21 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:29.863 13:08:21 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:29.863 13:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.863 13:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.863 13:08:21 -- common/autotest_common.sh@10 -- # set +x 00:10:29.863 ************************************ 00:10:29.863 START TEST nvme_scc 00:10:29.863 ************************************ 00:10:29.863 13:08:21 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:29.863 * Looking for test storage... 
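Each of the three device passes above follows the same recipe: identify the controller over PCIe, confirm it reports a namespace, check whether that namespace uses extended-data LBAs to pick the fio block size, and invoke fio with the plugin's `trtype=PCIe traddr=...` filename, dots standing in for the colons of the PCI address because fio splits filenames on ':'. A sketch of that loop under the log's paths and its 4 KiB default; `fio_with_preload` is a hypothetical stand-in for the preload logic sketched earlier:

```bash
#!/usr/bin/env bash
# One fio pass per controller, reconstructed from the xtrace above.
set -euo pipefail

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
fio_job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

for bdf in 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # Skip controllers that expose no identifiable namespace.
    "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue

    # The runs above all take the plain-LBA path and land on bs=4096;
    # an 'Extended Data LBA' namespace would need metadata-aware sizing.
    bs=4096

    # fio treats ':' as a filename separator, hence the dotted address.
    fio_with_preload "$fio_job" "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done
```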
00:10:29.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:29.863 13:08:21 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.863 13:08:21 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.863 13:08:21 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.123 13:08:21 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.123 --rc genhtml_branch_coverage=1 00:10:30.123 --rc genhtml_function_coverage=1 00:10:30.123 --rc genhtml_legend=1 00:10:30.123 --rc geninfo_all_blocks=1 00:10:30.123 --rc geninfo_unexecuted_blocks=1 00:10:30.123 00:10:30.123 ' 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.123 --rc genhtml_branch_coverage=1 00:10:30.123 --rc genhtml_function_coverage=1 00:10:30.123 --rc genhtml_legend=1 00:10:30.123 --rc geninfo_all_blocks=1 00:10:30.123 --rc geninfo_unexecuted_blocks=1 00:10:30.123 00:10:30.123 ' 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:30.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.123 --rc genhtml_branch_coverage=1 00:10:30.123 --rc genhtml_function_coverage=1 00:10:30.123 --rc genhtml_legend=1 00:10:30.123 --rc geninfo_all_blocks=1 00:10:30.123 --rc geninfo_unexecuted_blocks=1 00:10:30.123 00:10:30.123 ' 00:10:30.123 13:08:21 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.123 --rc genhtml_branch_coverage=1 00:10:30.123 --rc genhtml_function_coverage=1 00:10:30.123 --rc genhtml_legend=1 00:10:30.123 --rc geninfo_all_blocks=1 00:10:30.123 --rc geninfo_unexecuted_blocks=1 00:10:30.124 00:10:30.124 ' 00:10:30.124 13:08:21 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.124 13:08:21 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.124 13:08:21 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.124 13:08:21 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.124 13:08:21 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.124 13:08:21 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.124 13:08:21 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.124 13:08:21 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.124 13:08:21 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:30.124 13:08:21 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
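The `lt 1.15 2` probe above is a component-wise version comparison: both strings are split on '.', '-' and ':' into arrays and walked index by index, with missing components treated as zero. A compact sketch of the same check, assuming purely numeric components (the full scripts/common.sh version also validates each field):

```bash
#!/usr/bin/env bash
# lt: succeed when version $1 sorts strictly before version $2,
# comparing dot/dash/colon-separated numeric components left to right.

lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"

    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad the shorter version with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2"    # matches the branch taken above
```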
00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:30.124 13:08:21 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:30.124 13:08:21 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:30.124 13:08:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:30.124 13:08:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:30.124 13:08:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:30.124 13:08:21 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:30.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.952 Waiting for block devices as requested 00:10:30.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.211 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.211 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:36.490 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:36.490 13:08:27 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:36.490 13:08:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:36.490 13:08:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:36.490 13:08:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.490 13:08:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
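What follows from here is nvme_get snapshotting every field of `nvme id-ctrl /dev/nvme0` into the nvme0 associative array, one eval per `reg : val` pair read from nvme-cli's text output. A minimal sketch of that parse loop without the eval indirection, assuming nvme-cli's default field-per-line format and the binary path used in the log:

```bash
#!/usr/bin/env bash
# Snapshot "reg : val" lines from nvme-cli into an associative array,
# the way nvme_get populates nvme0 in the readout below.

declare -A nvme0

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue              # skip banners and blank lines
    reg=${reg//[[:space:]]/}               # field names carry no spaces
    val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace only,
    nvme0[$reg]=$val                       # preserving values like "QEMU NVMe Ctrl "
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"
```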
00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.490 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:36.491 13:08:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.491 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:36.492 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.493 13:08:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:36.493 
13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
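The pattern repeating above is nvme_get populating one global associative array per device node: nvme-cli's "field : value" output is read with IFS=: into reg/val pairs, and each pair is eval'd into the array named after the device, hence entries like ng0n1[nsze]=0x140000. A minimal sketch of that technique under those assumptions; nvme_get_sketch is an illustrative name, not the verbatim test/nvme/functions.sh source:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # global assoc array named after the device, as in the trace
        while IFS=: read -r reg val; do           # split "reg : val" at the first colon; val keeps later colons
            [[ -n $reg && -n $val ]] || continue  # skip banners and blank lines
            reg=${reg//[^a-zA-Z0-9_]/}            # "nsze    " -> "nsze"
            val=${val# }                          # drop the space after the colon
            eval "${ref}[$reg]=\"\$val\""         # e.g. ng0n1[nsze]=0x140000
        done < <("$@")                            # e.g. nvme id-ns /dev/ng0n1
    }

Called as nvme_get_sketch ng0n1 nvme id-ns /dev/ng0n1, it leaves ${ng0n1[nsze]} holding 0x140000, matching the assignments traced above.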
00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.493 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:36.494 13:08:27 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:36.494 13:08:28 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.494 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:36.495 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.495 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.496 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:36.761 13:08:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:36.761 13:08:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:36.761 13:08:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.761 13:08:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:36.761 13:08:28 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.761 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 
13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:36.762 
13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:36.762 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:36.763 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
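Between controllers, the loop above also registers what it just parsed (ctrls, nvmes, bdfs, ordered_ctrls, as in the nvme0 to 0000:00:11.0 hand-off earlier) and only claims the next /sys/class/nvme/nvme* node when pci_can_use accepts its BDF. Judging from the scripts/common.sh trace for 0000:00:10.0 (a =~ test against an empty expansion, then [[ -z '' ]], then return 0), the gate is an allow/block-list check whose lists are empty in this run. A hedged sketch follows; PCI_ALLOWED and PCI_BLOCKED as the list names are an assumption, not confirmed by this log:

    pci_can_use_sketch() {
        local bdf=$1 i
        for i in ${PCI_BLOCKED:-}; do           # an explicit block always wins (assumed variable name)
            [[ $i == "$bdf" ]] && return 1
        done
        [[ -z ${PCI_ALLOWED:-} ]] && return 0   # empty allow-list: every BDF usable (assumed variable name)
        for i in $PCI_ALLOWED; do
            [[ $i == "$bdf" ]] && return 0
        done
        return 1
    }

With both lists unset, as in this run, the check returns 0 and the device is claimed as nvme1.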
00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.764 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.764 13:08:28 
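Editor's note: the loop traced above is easier to read as code. Below is a minimal sketch of that parsing pattern, not the verbatim nvme/functions.sh implementation: it uses a nameref instead of the script's eval, and assumes an `nvme` binary (nvme-cli) on PATH; the helper name parse_id_output is invented for illustration.

  #!/usr/bin/env bash
  # Sketch of the pattern the trace repeats: split each "reg : val"
  # line of `nvme id-ctrl` output on ':' and store it in the
  # associative array whose name is passed in $1.
  parse_id_output() {
      local ref=$1 dev=$2 reg val
      local -n arr=$ref                      # nameref to caller's array
      while IFS=: read -r reg val; do
          reg=${reg// /} val=${val# }        # trim padding around fields
          [[ -n $reg && -n $val ]] || continue
          arr[$reg]=$val
      done < <(nvme id-ctrl "$dev")
  }

  declare -A ctrl=()
  parse_id_output ctrl /dev/nvme1
  echo "oncs=${ctrl[oncs]:-unset}"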
00:10:36.764 13:08:28 nvme_scc -- nvme/functions.sh@53-57 -- local -n _ctrl_ns=nvme1_ns; scanning namespaces of nvme1: found /sys/class/nvme/nvme1/ng1n1, running /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 into ng1n1=()
00:10:36.765 13:08:28 nvme_scc -- ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:36.766 13:08:28 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[1]=ng1n1
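Editor's note: this run is the nvme_scc suite, and the oncs=0x15d parsed for nvme1 above is the field such a test would consult. A hedged illustration follows; per the NVMe base specification, ONCS bit 8 advertises the Copy (Simple Copy) command, though this snippet is an illustration rather than the exact check the SPDK script performs.

  # Illustration only: assumes the nvme1 array populated by the trace
  # above; ONCS bit 8 = Copy command support per the NVMe base spec.
  declare -A nvme1=([oncs]=0x15d)
  if (( nvme1[oncs] & (1 << 8) )); then
      echo "nvme1 supports the Simple Copy command"   # 0x15d has bit 8 set
  fi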
00:10:36.766 13:08:28 nvme_scc -- nvme/functions.sh@54-57 -- found /sys/class/nvme/nvme1/nvme1n1, running /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 into nvme1n1=()
00:10:36.767 13:08:28 nvme_scc -- nvme1n1 id-ns (identical to ng1n1): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:36.767 13:08:28 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[1]=nvme1n1
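Editor's note: the geometry parsed above pins down the namespace size. flbas=0x7 selects lbaf7, whose lbads:12 means 2^12 = 4096-byte logical blocks (with ms:64 bytes of metadata per block), and nsze=0x17a17a is the block count. A quick back-of-the-envelope check:

  # Namespace capacity from the id-ns fields traced above:
  nsze=$((0x17a17a))          # 1548666 logical blocks
  block=$((1 << 12))          # lbads:12 -> 4096-byte blocks
  echo "$((nsze * block)) bytes"   # 6343335936 bytes, about 5.9 GiB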
00:10:36.767 13:08:28 nvme_scc -- nvme/functions.sh@60-63 -- controller registered: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1
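Editor's note: the scan then advances to the next controller. The enumeration visible at functions.sh@47-52 walks /sys/class/nvme/nvme*, filters each device through pci_can_use, and records its PCI address. A simplified sketch of that shape follows; PCI_ALLOWED here is an assumed stand-in for the allow-list the real pci_can_use consults (empty, as in this run's `[[ =~ ]]` trace, meaning "allow everything").

  # Sketch of the controller-enumeration loop (simplified):
  declare -A bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
      # empty allow-list admits every BDF, mirroring pci_can_use here
      [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $bdf "* ]] || continue
      bdfs[${ctrl##*/}]=$bdf
  done
  declare -p bdfs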
'nvme2[fr]="8.0.0 "' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.768 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:36.769 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:36.769 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.769 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:36.770 
13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.770 
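The trace above is nvme_get filling the nvme2 associative array: every "reg : val" line emitted by id-ctrl is split on ':' and assigned. A minimal sketch of that loop, assuming a plain nvme binary on PATH; parse_id_output is an illustrative name, not the real functions.sh helper:

parse_id_output() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref=()"                # global assoc array named by caller
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # register names carry pad spaces
        val=${val# }                     # keep internal spaces (power states)
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"       # e.g. nvme2[oacs]=0x12a, as traced
    done < <(nvme "$cmd" "$dev")
}
# Usage mirroring the trace: parse_id_output nvme2 id-ctrl /dev/nvme2

Because read hands the remainder of the line to the last variable, values that themselves contain colons (the ps0 power-state line, the lbafN entries) survive intact, which is why the trace stores strings like 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' verbatim.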
13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.770 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.771 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:36.772 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 
13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.772 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.773 13:08:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.773 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.774 13:08:28 
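Between namespace dumps the trace shows the functions.sh@54 loop selecting the next node (ng2n1, then ng2n2, now ng2n3). A sketch of that walk under the same /sys/class/nvme layout; ctrl_ns stands in for the _ctrl_ns nameref and the redirect is only to keep the sketch quiet:

shopt -s extglob nullglob                 # @(...) pattern needs extglob
declare -A ctrl_ns=()
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                      # ng2n1, ng2n2, ... as in the trace
    nvme id-ns "/dev/$ns_dev" >/dev/null  # identify-namespace data per node
    ctrl_ns[${ns_dev##*n}]=$ns_dev        # _ctrl_ns[${ns##*n}]=... equivalent
done

The glob matches both the char (ng2nY) and block (nvme2nY) namespace names, so each NSID is keyed once per device node that exists for it.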
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:36.774 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:37.038 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:37.039 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:37.039-00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme2n1[]: all values identical to ng2n3[] above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ', lbaf4 in use)
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
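The @54 extglob pattern that opens each block deserves a gloss: for ctrl=/sys/class/nvme/nvme2 it expands to @(ng2|nvme2n)*, so the loop visits the character devices ng2n1-ng2n3 and the block devices nvme2n1-nvme2n3 in glob order. A hedged sketch of the enumeration (requires shopt -s extglob, which functions.sh presumably enables):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      _ctrl_ns[${ns##*n}]=${ns##*/}   # keyed by namespace index: "3" -> ng2n3
    done

Because ng* sorts before nvme* in the glob expansion, each index is first set to the ng device and then overwritten by the block device, which is why the trace registers _ctrl_ns[3]=ng2n3 here and _ctrl_ns[3]=nvme2n3 further down.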
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:37.040 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:37.040-00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme2n2[]: all values identical to ng2n3[] above
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
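The recurring values decode to the namespace geometry this test runs against: flbas=0x4 keeps the active LBA format index in its low nibble, lbaf4 carries lbads:12 (2^12 = 4096-byte blocks), and nsze=0x100000 is the size in blocks. A quick arithmetic check, with the constants taken from the trace:

    flbas=0x4 nsze=0x100000 lbads=12
    fmt=$(( flbas & 0xf ))          # low nibble -> LBA format 4
    bs=$(( 1 << lbads ))            # 4096 bytes
    printf 'lbaf%d: %d-byte blocks, %d GiB per namespace\n' \
      "$fmt" "$bs" $(( nsze * bs >> 30 ))
    # -> lbaf4: 4096-byte blocks, 4 GiB per namespace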
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:37.042 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:37.042-00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme2n3[]: all values identical to ng2n3[] above
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.043 13:08:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:37.044 13:08:28 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:37.044 13:08:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:37.044 13:08:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:37.044 13:08:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.044 13:08:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:37.044 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:37.044 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.044 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 
13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:37.045 13:08:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 
13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:37.045 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:37.046 
13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:37.046 13:08:28 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:37.046 13:08:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.046 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
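Decoded, the trace at this point has finished scanning all four controllers and is running the feature-based selection for the scc suite: get_ctrls_with_feature loops over ${!ctrls[@]} and keeps every controller whose ONCS word has bit 8 (the Copy command bit, which is what "SCC" refers to here) set, reading ONCS back out of the parsed nvmeN associative array through a bash nameref; the same check resumes below for nvme3 and then nvme2. A condensed sketch of the two functions being traced, reduced from test/common/nvme/functions.sh (the get_oncs indirection and error paths are trimmed):

    get_nvme_ctrl_feature() {    # functions.sh@69-76 in the trace
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl     # nameref: resolves to the parsed nvme1=() array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    ctrl_has_scc() {             # functions.sh@184-188: ONCS bit 8 = Copy support
        local ctrl=$1 oncs
        oncs=$(get_nvme_ctrl_feature "$ctrl" oncs)
        (( oncs & 1 << 8 ))      # 0x15d = 0b101011101, so bit 8 is set here
    }

Every QEMU controller in this run reports oncs=0x15d, so all four pass the check; get_ctrl_with_feature then takes the first entry of the ordered list, nvme1 (0000:00:10.0), as the test target, which is the echo nvme1 / ctrl=nvme1 visible below.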
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:37.047 13:08:28 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:37.047 13:08:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:37.047 13:08:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:37.047 13:08:28 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:37.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:38.553 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:38.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:38.553 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:38.553 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:38.812 13:08:30 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:38.812 13:08:30 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:38.812 13:08:30 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:38.812 13:08:30 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:38.812 ************************************
00:10:38.812 START TEST nvme_simple_copy
00:10:38.812 ************************************
00:10:38.812 13:08:30 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:39.072 Initializing NVMe Controllers
00:10:39.072 Attaching to 0000:00:10.0
00:10:39.072 Controller supports SCC. Attached to 0000:00:10.0
00:10:39.072 Namespace ID: 1 size: 6GB
00:10:39.072 Initialization complete.
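Decoded, the simple_copy output around this point shows the test writing random data to LBAs 0 through 63, issuing one Copy command with destination LBA 256, reading the destination range back, and counting matching LBAs (all 64 match below). As a purely hypothetical spot-check of the same property with stock tools, one could compare the two ranges directly; this only works once the namespace is bound back to the kernel nvme driver (during the run the controllers sit on uio_pci_generic, so no /dev/nvme* block nodes exist), and both the device path and the 4096-byte block size (the lbads:12 format marked "in use" above) are assumptions:

    bs=4096                                       # assumed: lbads:12 format in use
    dd if=/dev/nvme0n1 of=/tmp/src.bin bs=$bs skip=0   count=64 status=none
    dd if=/dev/nvme0n1 of=/tmp/dst.bin bs=$bs skip=256 count=64 status=none
    cmp --silent /tmp/src.bin /tmp/dst.bin && echo "all 64 LBAs match"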
00:10:39.072
00:10:39.072 Controller QEMU NVMe Ctrl (12340 )
00:10:39.072 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:39.072 Namespace Block Size:4096
00:10:39.072 Writing LBAs 0 to 63 with Random Data
00:10:39.072 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:39.072 LBAs matching Written Data: 64
00:10:39.072
00:10:39.072 real 0m0.328s
00:10:39.072 user 0m0.122s
00:10:39.072 sys 0m0.104s
00:10:39.072 13:08:30 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.072 13:08:30 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:39.072 ************************************
00:10:39.072 END TEST nvme_simple_copy
00:10:39.072 ************************************
00:10:39.072 ************************************
00:10:39.072 END TEST nvme_scc
00:10:39.072 ************************************
00:10:39.072
00:10:39.072 real 0m9.377s
00:10:39.072 user 0m1.751s
00:10:39.072 sys 0m2.605s
00:10:39.072 13:08:30 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.072 13:08:30 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:39.331 13:08:30 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:39.331 13:08:30 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:39.331 13:08:30 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:39.331 13:08:30 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:39.331 13:08:30 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:39.331 13:08:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:39.331 13:08:30 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.331 13:08:30 -- common/autotest_common.sh@10 -- # set +x
00:10:39.331 ************************************
00:10:39.331 START TEST nvme_fdp
00:10:39.331 ************************************
00:10:39.332 13:08:30 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:10:39.332 * Looking for test storage...
00:10:39.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:39.332 13:08:30 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:39.332 13:08:30 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:10:39.332 13:08:30 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:39.332 13:08:30 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.332 13:08:30 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:39.592 13:08:30 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.592 13:08:30 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.592 --rc genhtml_branch_coverage=1 00:10:39.592 --rc genhtml_function_coverage=1 00:10:39.592 --rc genhtml_legend=1 00:10:39.592 --rc geninfo_all_blocks=1 00:10:39.592 --rc geninfo_unexecuted_blocks=1 00:10:39.592 00:10:39.592 ' 00:10:39.592 13:08:30 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.592 --rc genhtml_branch_coverage=1 00:10:39.592 --rc genhtml_function_coverage=1 00:10:39.592 --rc genhtml_legend=1 00:10:39.592 --rc geninfo_all_blocks=1 00:10:39.592 --rc geninfo_unexecuted_blocks=1 00:10:39.592 00:10:39.592 ' 00:10:39.592 13:08:30 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.592 --rc genhtml_branch_coverage=1 00:10:39.592 --rc genhtml_function_coverage=1 00:10:39.592 --rc genhtml_legend=1 00:10:39.592 --rc geninfo_all_blocks=1 00:10:39.592 --rc geninfo_unexecuted_blocks=1 00:10:39.592 00:10:39.592 ' 00:10:39.592 13:08:30 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.592 --rc genhtml_branch_coverage=1 00:10:39.592 --rc genhtml_function_coverage=1 00:10:39.592 --rc genhtml_legend=1 00:10:39.592 --rc geninfo_all_blocks=1 00:10:39.592 --rc geninfo_unexecuted_blocks=1 00:10:39.592 00:10:39.592 ' 00:10:39.592 13:08:30 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:39.592 13:08:30 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:39.592 13:08:30 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:39.592 13:08:30 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:39.592 13:08:30 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.592 13:08:30 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.592 13:08:30 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.592 13:08:30 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.592 13:08:30 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.592 13:08:30 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:39.593 13:08:30 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:39.593 13:08:30 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:39.593 13:08:30 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:39.593 13:08:30 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:40.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.421 Waiting for block devices as requested 00:10:40.421 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.421 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.680 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.680 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:45.963 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:45.963 13:08:37 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:45.963 13:08:37 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:45.963 13:08:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:45.963 13:08:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:45.963 13:08:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.963 13:08:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.963 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.963 13:08:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:45.964 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.964 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.964 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:45.965 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 
13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:45.965 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:45.965 13:08:37 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:45.965 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:45.966 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
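The dump above is functions.sh's nvme_get() walking `/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1` output: each `IFS=:` / `read -r reg val` pair consumes one line of nvme-cli output, the `[[ -n ... ]]` test skips empty values, and the `eval` stores the pair into the ng0n1 associative array (e.g. ng0n1[nsze]=0x140000). A minimal bash sketch of that parsing pattern, reconstructed from this trace rather than copied from functions.sh, so the helper body is an approximation:

declare -A ng0n1=()
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip blank/colon-less lines, like the [[ -n ... ]] guards in the trace
    reg=${reg//[[:space:]]/}               # "nsze   " -> nsze, "lbaf  0 " -> lbaf0 (matching the lbaf0..lbaf7 keys)
    val=${val# }                           # drop the space that follows the colon
    eval "ng0n1[$reg]=\"$val\""            # mirrors the eval 'ng0n1[nsze]="0x140000"' lines above
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)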
00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:45.966 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
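With the ng0n1 char node parsed, the loop at functions.sh@54 (its next pass starts just below) re-enters for the matching block node: the extglob pattern `"$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` matches each namespace twice, once as ng0n1 and once as nvme0n1, and @58 records the result in the per-controller _ctrl_ns map. A condensed sketch of that discovery loop for the nvme0 controller seen here (scan_nvme_ctrls does more around it, so treat this as an approximation):

shopt -s extglob                        # switched on earlier via scripts/common.sh@15
ctrl=/sys/class/nvme/nvme0
declare -A nvme0_ns=()
declare -n _ctrl_ns=nvme0_ns            # stands in for the 'local -n _ctrl_ns=nvme0_ns' at @53
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # expands to ng0n1 and nvme0n1
    [[ -e $ns ]] || continue            # same existence guard as @55
    ns_dev=${ns##*/}
    # nvme_get "$ns_dev" id-ns "/dev/$ns_dev" runs here, producing the register dumps in this log
    _ctrl_ns[${ns##*n}]=$ns_dev         # @58: both flavors key to namespace id 1; the second pass overwrites the first
done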
00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.966 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:45.967 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.967 13:08:37 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:45.967 13:08:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:45.967 13:08:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:45.967 13:08:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.967 13:08:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:45.967 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:45.967 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
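The four-step cycle that repeats all through this stretch (IFS=:, read -r reg val, a [[ -n ... ]] guard, then an eval assignment) is the whole of the nvme_get parser: it feeds nvme-cli output through a read loop and stores every "field : value" pair in a global associative array named after the device. Below is a condensed sketch of that loop, reconstructed from the functions.sh@16-23 steps visible in the trace; it is not the verbatim script, and the whitespace handling in particular is a simplified stand-in.

    nvme_get() {                     # called as: nvme_get nvme1 id-ctrl /dev/nvme1
      local ref=$1 reg val
      shift
      local -gA "$ref=()"            # the local -gA 'nvme1=()' step in the trace
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # "lpa      : 0x7" -> reg=lpa
        val="${val#"${val%%[![:space:]]*}"}"     # strip padding after the colon
        [[ -n $reg && -n $val ]] || continue     # the [[ -n ... ]] guard above
        eval "${ref}[\$reg]=\$val"               # the eval 'nvme1[lpa]="0x7"' step
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Every controller and namespace found in this run (nvme0, ng0n1, nvme0n1, nvme1, ng1n1, nvme1n1) gets one such array, alongside the ctrls/nvmes/bdfs bookkeeping arrays filled in at functions.sh@60-62, which is why the same register names scroll past once per device.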
00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
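Several of the nvme1 id-ctrl values captured just above are spec-encoded rather than plain numbers: mdts is a power-of-two multiplier on the controller's minimum memory page size, ver packs the NVMe version into bytes, and the temperature thresholds are kelvins. A quick decode in shell arithmetic, assuming the usual QEMU minimum page size of 4 KiB (MPSMIN=0; that assumption is not itself part of the trace):

    mdts=7 ver=0x10400 wctemp=343 cctemp=373     # values parsed into nvme1[] above
    echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"                 # 128 pages = 512 KiB
    printf 'NVMe version: %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff ))  # 1.4
    echo "warn/crit temps: $(( wctemp - 273 ))C / $(( cctemp - 273 ))C"       # ~70C / ~100C

So the emulated controller with serial 12340 advertises NVMe 1.4, 512 KiB maximum transfers, and 70/100 degree temperature thresholds.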
00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.968 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:45.969 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
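ng1n1 just reported flbas=0x7 against nlbaf=7 (eight LBA formats, lbaf0..lbaf7). Per the NVMe FLBAS encoding, the low nibble selects which lbafN entry is in force and bit 4 says whether metadata is transferred inline with the data; within each lbafN string, lbads is the log2 of the data block size and ms is the per-block metadata size. Decoding this namespace's values (lbads/ms are taken from the lbaf7 line that follows a little later in the trace):

    flbas=0x7
    fmt=$(( flbas & 0xf ))        # bits 3:0 -> format 7 is active
    ext=$(( (flbas >> 4) & 1 ))   # bit 4 -> 0: metadata in a separate buffer
    lbads=12 ms=64                # from "lbaf7 : ms:64 lbads:12 rp:0 (in use)"
    echo "lbaf${fmt}: $(( 1 << lbads ))-byte blocks, ${ms}B metadata, extended=${ext}"

which prints "lbaf7: 4096-byte blocks, 64B metadata, extended=0". Compare nvme0n1 earlier, whose flbas=0x4 picked lbaf4 (ms:0 lbads:12), i.e. plain 4 KiB blocks with no metadata.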
00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:45.969 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
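With the LBA format known, namespace capacity falls out of nsze times the block size: ng1n1 read nsze=0x17a17a a few steps back and nvme0n1 earlier reported nsze=0x140000, both on 4096-byte formats. Worked out in shell arithmetic (the MiB figures are only for orientation):

    for nsze in 0x140000 0x17a17a; do      # nvme0n1, then ng1n1/nvme1n1
      echo "$nsze * 4096 = $(( nsze * 4096 )) bytes ($(( (nsze * 4096) >> 20 )) MiB)"
    done
    # 0x140000 * 4096 = 5368709120 bytes (5120 MiB), exactly 5 GiB
    # 0x17a17a * 4096 = 6343335936 bytes (6049 MiB)

Since nsze, ncap and nuse are equal on both namespaces, these are fully provisioned emulated disks, consistent with thin provisioning not being advertised (bit 0 of nsfeat=0x14 is clear).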
00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.969 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.970 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:45.970 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:45.970 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
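Each lbafN value captured in these records bundles one LBA-format descriptor: ms is the metadata bytes carried per block, lbads is the data block size as a power of two, and rp is a relative-performance hint. Under the standard NVMe encoding the low nibble of flbas indexes the active format, so nvme1n1's flbas=0x7 points at lbaf7, which the records just below tag "(in use)". A quick decode of those values:

  flbas=0x7                            # nvme1n1[flbas] from this trace
  fmt=$(( flbas & 0xf ))               # low nibble selects the format -> 7
  lbads=12                             # from "ms:64 lbads:12 rp:0 (in use)"
  echo "lbaf$fmt: $(( 1 << lbads )) B blocks + 64 B metadata"
  # -> lbaf7: 4096 B blocks + 64 B metadata

The lbads:9 descriptors are the 512-byte variants (1 << 9 = 512).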
00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:45.970 13:08:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:45.970 13:08:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:45.970 13:08:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.970 13:08:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:45.970 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.971 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
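Among the nvme2 fields just read, mdts=7 bounds the largest data transfer a single command may carry: 2^mdts units of the controller's minimum memory page size (CAP.MPSMIN), which id-ctrl itself does not report. A sketch of the arithmetic, assuming the usual 4 KiB minimum page, an assumption and not something this log states:

  mdts=7                               # nvme2[mdts] from this trace
  mps_min=4096                         # assumed: CAP.MPSMIN = 0 -> 4 KiB
  echo "max transfer: $(( (1 << mdts) * mps_min / 1024 )) KiB"
  # -> max transfer: 512 KiB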
00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.235 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:46.236 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
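The wctemp=343 and cctemp=373 values captured just above are the warning and critical composite-temperature thresholds, which NVMe reports in kelvin, so converting to Celsius is a plain subtraction:

  wctemp=343; cctemp=373               # nvme2 values from this trace, kelvin
  echo "warning at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"
  # -> warning at 70C, critical at 100C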
00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:46.236 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:46.236 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
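Two of the nvme2 fields gathered above, sqes=0x66 and cqes=0x44, pack two sizes per byte: bits 3:0 give the required (minimum) queue-entry size and bits 7:4 the maximum, each as a power of two in bytes. Decoding them shows this QEMU controller only speaks the standard 64-byte submission and 16-byte completion entries:

  sqes=0x66; cqes=0x44                 # nvme2 values from this trace
  echo "SQE $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) B," \
       "CQE $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) B"
  # -> SQE 64..64 B, CQE 16..16 B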
00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:46.237 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 
13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.238 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # 
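
The surrounding loop (functions.sh@54-58) is what carries the trace from one namespace to the next: a single extglob pattern matches both the generic character nodes (ng2n1, ng2n2, ...) and, later, the block nodes (nvme2n1, ...) under the controller's sysfs directory, and each parsed device is recorded under its namespace index via the _ctrl_ns nameref set up at functions.sh@53. A sketch under those assumptions, reusing the nvme_get sketch above; in this simplified form a block node simply overwrites the generic entry for the same index:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns
    declare -n _ctrl_ns=nvme2_ns            # nameref, as at functions.sh@53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue            # functions.sh@55
        ns_dev=${ns##*/}                    # ng2n1 ... then nvme2n1 ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # key by trailing namespace id
    done
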
ng2n2[nsze]=0x100000 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:46.239 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:46.240 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 
13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.240 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # 
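
One detail worth decoding from the repeated fields: every namespace reports flbas=0x4 and marks lbaf4 "(in use)". The low nibble of flbas selects the active LBA format descriptor, nlbaf=7 means the descriptors run lbaf0 through lbaf7 (the field is zero-based), and lbads:12 in lbaf4 means 2^12 = 4096-byte logical blocks (the lbads:9 descriptors are the 512-byte alternatives). A quick check against literal values copied from the trace:

    # Decode the in-use LBA format from the fields nvme_get stored above.
    declare -A ng2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng2n2[flbas]} & 0xf ))        # low nibble = active format index
    lbaf=${ng2n2[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    echo "in-use block size: $(( 1 << lbads )) bytes"   # prints 4096
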
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:46.241 
13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.241 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:46.242 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.242 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:46.242 13:08:37 nvme_fdp -- 
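
From here the same id-ns pass repeats against the block-device node /dev/nvme2n1, and the fields come back identical to the /dev/ng2n1 run above, since both names refer to the same namespace. With the arrays populated, later test code can read them directly; an illustrative spot-check using values copied from the trace:

    declare -A nvme2n1=([nsze]=0x100000 [nlbaf]=7 [flbas]=0x4)
    echo "namespace size: $(( ${nvme2n1[nsze]} )) blocks"   # 0x100000 = 1048576
    declare -p nvme2n1      # dump everything nvme_get stored, for inspection
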
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.242 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.243 
13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:46.243 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.243 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.244 
13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
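The trace above is one full pass of nvme_get (nvme/functions.sh@16-23) over `nvme id-ns /dev/nvme2n1`: each `field : value` line is split on the colon, empty values are skipped, and the rest are eval'd into a global associative array (nvme2n1[nsze], nvme2n1[lbaf0], ...). A minimal sketch of that loop, with the helper and array names illustrative and whitespace handling simplified relative to the real functions.sh:

    # Sketch of the nvme_get pattern seen in the trace: parse nvme-cli
    # "field : value" output into a global associative array.
    nvme_get() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                   # e.g. declare -gA nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # 'lbaf  4 ' -> 'lbaf4'
            [[ -n $reg && -n $val ]] || continue  # skip blank values, as @22 does
            eval "${ref}[\$reg]=\$val"        # nvme2n1[nsze]=0x100000, ...
        done < <(nvme id-ns "$dev")           # the real helper also trims padding
    }
    # nvme_get nvme2n1 /dev/nvme2n1; echo "${nvme2n1[nsze]}"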
00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:46.244 13:08:37 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.244 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:46.245 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:46.245 13:08:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:46.246 13:08:37 
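The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` line at functions.sh@54 is what advances the trace from nvme2n2 to nvme2n3: for controller /sys/class/nvme/nvme2 the parameter expansions yield the extglob pattern @(ng2|nvme2n)*, which matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*). A standalone sketch, assuming extglob paths under one controller:

    # Sketch of the namespace enumeration at functions.sh@54-57.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2                # controller from the outer loop
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue              # the @55 existence check
        ns_dev=${ns##*/}                      # nvme2n1, nvme2n2, nvme2n3, ...
        echo "nvme id-ns /dev/$ns_dev"        # what the trace runs for each one
    done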
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:46.246 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.246 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:46.247 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:46.247 13:08:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:46.247 13:08:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:46.247 13:08:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:46.247 13:08:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.247 13:08:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.508 13:08:37 nvme_fdp -- 
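With nvme2's third namespace stored, the outer loop (functions.sh@47-52) records the controller's mappings (ctrls, nvmes, bdfs[nvme2]=0000:00:12.0) and moves on to nvme3 at 0000:00:13.0, first asking scripts/common.sh pci_can_use whether that BDF is usable. A rough sketch of that walk; resolving the BDF via readlink on the sysfs `device` link is an assumption here, and pci_can_use stands in for the suite's allow/block-list check:

    # Sketch of the controller walk at functions.sh@47-52.
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(readlink -f "$ctrl/device")     # .../0000:00:13.0 (assumed layout)
        pci=${pci##*/}
        pci_can_use "$pci" || continue        # honors the PCI allow/block lists
        ctrl_dev=${ctrl##*/}                  # nvme3
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$pci
    done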
nvme/functions.sh@21 -- # read -r reg val 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:46.508 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 
13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.509 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:46.510 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:46.511 13:08:37 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:46.511 13:08:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:46.512 13:08:37 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:46.512 13:08:37 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:46.512 13:08:37 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:47.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.690 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:47.949 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:47.949 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:47.949 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:48.208 13:08:39 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:48.208 13:08:39 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.208 13:08:39 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.208 13:08:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:48.208 ************************************ 00:10:48.208 START TEST nvme_flexible_data_placement 00:10:48.208 ************************************ 00:10:48.208 13:08:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:48.468 Initializing NVMe Controllers 00:10:48.468 Attaching to 0000:00:13.0 00:10:48.468 Controller supports FDP Attached to 0000:00:13.0 00:10:48.468 Namespace ID: 1 Endurance Group ID: 1 00:10:48.468 Initialization complete. 
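Before the per-namespace output below, it is worth pinning down the selection logic traced above: a controller qualifies for this test only when CTRATT bit 19 (Flexible Data Placement) is set, which is why nvme3 (ctratt=0x88010) was chosen over the others (ctratt=0x8000). A minimal standalone sketch of that check, assuming the ctratt values are already extracted as in the trace (the array literal here is hypothetical; functions.sh builds it from 'nvme id-ctrl' output instead):

    #!/usr/bin/env bash
    # Values lifted from the trace above, not queried from hardware.
    declare -A ctratts=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
    for ctrl in "${!ctratts[@]}"; do
        ctratt=${ctratts[$ctrl]}
        # 1 << 19 == 0x80000: the FDP capability bit in the CTRATT field.
        if (( ctratt & 1 << 19 )); then
            echo "$ctrl"   # prints nvme3, the only FDP-capable controller here
        fi
    done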
00:10:48.468 00:10:48.468 ================================== 00:10:48.468 == FDP tests for Namespace: #01 == 00:10:48.468 ================================== 00:10:48.468 00:10:48.468 Get Feature: FDP: 00:10:48.468 ================= 00:10:48.468 Enabled: Yes 00:10:48.468 FDP configuration Index: 0 00:10:48.468 00:10:48.468 FDP configurations log page 00:10:48.468 =========================== 00:10:48.468 Number of FDP configurations: 1 00:10:48.468 Version: 0 00:10:48.468 Size: 112 00:10:48.468 FDP Configuration Descriptor: 0 00:10:48.468 Descriptor Size: 96 00:10:48.468 Reclaim Group Identifier format: 2 00:10:48.468 FDP Volatile Write Cache: Not Present 00:10:48.468 FDP Configuration: Valid 00:10:48.468 Vendor Specific Size: 0 00:10:48.468 Number of Reclaim Groups: 2 00:10:48.468 Number of Reclaim Unit Handles: 8 00:10:48.468 Max Placement Identifiers: 128 00:10:48.468 Number of Namespaces Supported: 256 00:10:48.468 Reclaim Unit Nominal Size: 6000000 bytes 00:10:48.468 Estimated Reclaim Unit Time Limit: Not Reported 00:10:48.468 RUH Desc #000: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #001: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #002: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #003: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #004: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #005: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #006: RUH Type: Initially Isolated 00:10:48.468 RUH Desc #007: RUH Type: Initially Isolated 00:10:48.468 00:10:48.468 FDP reclaim unit handle usage log page 00:10:48.468 ====================================== 00:10:48.468 Number of Reclaim Unit Handles: 8 00:10:48.468 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:48.468 RUH Usage Desc #001: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #002: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #003: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #004: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #005: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #006: RUH Attributes: Unused 00:10:48.468 RUH Usage Desc #007: RUH Attributes: Unused 00:10:48.468 00:10:48.468 FDP statistics log page 00:10:48.468 ======================= 00:10:48.468 Host bytes with metadata written: 909512704 00:10:48.468 Media bytes with metadata written: 909611008 00:10:48.468 Media bytes erased: 0 00:10:48.468 00:10:48.468 FDP Reclaim unit handle status 00:10:48.468 ============================== 00:10:48.468 Number of RUHS descriptors: 2 00:10:48.468 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005c9f 00:10:48.468 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:48.468 00:10:48.468 FDP write on placement id: 0 success 00:10:48.468 00:10:48.468 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:48.468 00:10:48.468 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:48.468 00:10:48.468 Get Feature: FDP Events for Placement handle: #0 00:10:48.468 ======================== 00:10:48.468 Number of FDP Events: 6 00:10:48.468 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:48.468 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:48.468 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:48.468 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:48.468 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:48.468 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:48.468 00:10:48.468 FDP events log page
00:10:48.468 =================== 00:10:48.468 Number of FDP events: 1 00:10:48.468 FDP Event #0: 00:10:48.468 Event Type: RU Not Written to Capacity 00:10:48.468 Placement Identifier: Valid 00:10:48.468 NSID: Valid 00:10:48.468 Location: Valid 00:10:48.468 Placement Identifier: 0 00:10:48.468 Event Timestamp: 8 00:10:48.468 Namespace Identifier: 1 00:10:48.468 Reclaim Group Identifier: 0 00:10:48.468 Reclaim Unit Handle Identifier: 0 00:10:48.468 00:10:48.468 FDP test passed 00:10:48.468 00:10:48.468 real 0m0.294s 00:10:48.468 user 0m0.092s 00:10:48.468 sys 0m0.101s 00:10:48.468 ************************************ 00:10:48.468 END TEST nvme_flexible_data_placement 00:10:48.468 ************************************ 00:10:48.468 13:08:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.468 13:08:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:48.468 ************************************ 00:10:48.468 END TEST nvme_fdp 00:10:48.468 ************************************ 00:10:48.468 00:10:48.468 real 0m9.219s 00:10:48.468 user 0m1.636s 00:10:48.468 sys 0m2.613s 00:10:48.468 13:08:39 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.468 13:08:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:48.468 13:08:39 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:48.468 13:08:39 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:48.468 13:08:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.468 13:08:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.468 13:08:39 -- common/autotest_common.sh@10 -- # set +x 00:10:48.468 ************************************ 00:10:48.468 START TEST nvme_rpc 00:10:48.468 ************************************ 00:10:48.468 13:08:39 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:48.728 * Looking for test storage... 
00:10:48.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.728 13:08:40 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.728 --rc genhtml_branch_coverage=1 00:10:48.728 --rc genhtml_function_coverage=1 00:10:48.728 --rc genhtml_legend=1 00:10:48.728 --rc geninfo_all_blocks=1 00:10:48.728 --rc geninfo_unexecuted_blocks=1 00:10:48.728 00:10:48.728 ' 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.728 --rc genhtml_branch_coverage=1 00:10:48.728 --rc genhtml_function_coverage=1 00:10:48.728 --rc genhtml_legend=1 00:10:48.728 --rc geninfo_all_blocks=1 00:10:48.728 --rc geninfo_unexecuted_blocks=1 00:10:48.728 00:10:48.728 ' 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:48.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.728 --rc genhtml_branch_coverage=1 00:10:48.728 --rc genhtml_function_coverage=1 00:10:48.728 --rc genhtml_legend=1 00:10:48.728 --rc geninfo_all_blocks=1 00:10:48.728 --rc geninfo_unexecuted_blocks=1 00:10:48.728 00:10:48.728 ' 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.728 --rc genhtml_branch_coverage=1 00:10:48.728 --rc genhtml_function_coverage=1 00:10:48.728 --rc genhtml_legend=1 00:10:48.728 --rc geninfo_all_blocks=1 00:10:48.728 --rc geninfo_unexecuted_blocks=1 00:10:48.728 00:10:48.728 ' 00:10:48.728 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.728 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:48.728 13:08:40 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:48.988 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:48.988 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68448 00:10:48.988 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:48.988 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:48.988 13:08:40 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68448 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68448 ']' 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.988 13:08:40 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.988 [2024-12-11 13:08:40.438986] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
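The get_first_nvme_bdf helper traced just above reduces to one pipeline over gen_nvme.sh output; a standalone sketch of the same steps, assuming the repo checkout path used throughout this run:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk   # checkout path on this VM
    # gen_nvme.sh emits a JSON bdev config; pull each controller's PCI address (traddr).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe bdfs found" >&2; exit 1; }
    echo "${bdfs[0]}"   # 0000:00:10.0 in this run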
00:10:48.988 [2024-12-11 13:08:40.439468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68448 ] 00:10:49.247 [2024-12-11 13:08:40.628786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.247 [2024-12-11 13:08:40.757540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.247 [2024-12-11 13:08:40.757575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.626 13:08:41 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.626 13:08:41 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:50.626 13:08:41 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:50.626 Nvme0n1 00:10:50.626 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:50.626 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:50.884 request: 00:10:50.884 { 00:10:50.884 "bdev_name": "Nvme0n1", 00:10:50.884 "filename": "non_existing_file", 00:10:50.884 "method": "bdev_nvme_apply_firmware", 00:10:50.884 "req_id": 1 00:10:50.884 } 00:10:50.884 Got JSON-RPC error response 00:10:50.884 response: 00:10:50.884 { 00:10:50.884 "code": -32603, 00:10:50.884 "message": "open file failed." 00:10:50.884 } 00:10:50.884 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:50.884 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:50.884 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:51.143 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:51.143 13:08:42 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68448 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68448 ']' 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68448 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68448 00:10:51.143 killing process with pid 68448 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68448' 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@973 -- # kill 68448 00:10:51.143 13:08:42 nvme_rpc -- common/autotest_common.sh@978 -- # wait 68448 00:10:53.678 ************************************ 00:10:53.678 END TEST nvme_rpc 00:10:53.678 ************************************ 00:10:53.678 00:10:53.678 real 0m5.048s 00:10:53.678 user 0m9.015s 00:10:53.678 sys 0m1.007s 00:10:53.678 13:08:45 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.678 13:08:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.678 13:08:45 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:53.678 13:08:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:53.678 13:08:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.678 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:10:53.678 ************************************ 00:10:53.678 START TEST nvme_rpc_timeouts 00:10:53.678 ************************************ 00:10:53.678 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:53.678 * Looking for test storage... 00:10:53.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:53.678 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.678 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.678 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.937 13:08:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.937 --rc genhtml_branch_coverage=1 00:10:53.937 --rc genhtml_function_coverage=1 00:10:53.937 --rc genhtml_legend=1 00:10:53.937 --rc geninfo_all_blocks=1 00:10:53.937 --rc geninfo_unexecuted_blocks=1 00:10:53.937 00:10:53.937 ' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.937 --rc genhtml_branch_coverage=1 00:10:53.937 --rc genhtml_function_coverage=1 00:10:53.937 --rc genhtml_legend=1 00:10:53.937 --rc geninfo_all_blocks=1 00:10:53.937 --rc geninfo_unexecuted_blocks=1 00:10:53.937 00:10:53.937 ' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.937 --rc genhtml_branch_coverage=1 00:10:53.937 --rc genhtml_function_coverage=1 00:10:53.937 --rc genhtml_legend=1 00:10:53.937 --rc geninfo_all_blocks=1 00:10:53.937 --rc geninfo_unexecuted_blocks=1 00:10:53.937 00:10:53.937 ' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.937 --rc genhtml_branch_coverage=1 00:10:53.937 --rc genhtml_function_coverage=1 00:10:53.937 --rc genhtml_legend=1 00:10:53.937 --rc geninfo_all_blocks=1 00:10:53.937 --rc geninfo_unexecuted_blocks=1 00:10:53.937 00:10:53.937 ' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68537 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68537 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68569 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:53.937 13:08:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68569 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68569 ']' 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.937 13:08:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:53.938 [2024-12-11 13:08:45.443967] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:10:53.938 [2024-12-11 13:08:45.444347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68569 ] 00:10:54.197 [2024-12-11 13:08:45.632003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.197 [2024-12-11 13:08:45.761137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.197 [2024-12-11 13:08:45.761208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.576 13:08:46 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.576 Checking default timeout settings: 00:10:55.576 13:08:46 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:55.576 13:08:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:55.576 13:08:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:55.576 Making settings changes with rpc: 00:10:55.576 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:55.576 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:55.835 Check default vs. modified settings: 00:10:55.835 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:55.835 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 Setting action_on_timeout is changed as expected. 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 Setting timeout_us is changed as expected. 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
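Each of the three settings is verified with the same grep/awk/sed round trip traced above; condensed into a standalone sketch over the two snapshot files this run saved:

    #!/usr/bin/env bash
    default=/tmp/settings_default_68537     # save_config snapshot before bdev_nvme_set_options
    modified=/tmp/settings_modified_68537   # snapshot after
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Column 2 of the matching save_config line, stripped to alphanumerics.
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done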
00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.404 Setting timeout_admin_us is changed as expected. 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68537 /tmp/settings_modified_68537 00:10:56.404 13:08:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68569 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68569 ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68569 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68569 00:10:56.404 killing process with pid 68569 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68569' 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68569 00:10:56.404 13:08:47 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68569 00:10:58.941 RPC TIMEOUT SETTING TEST PASSED. 00:10:58.941 13:08:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
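[editor's note] Teardown goes through the killprocess helper from autotest_common.sh, whose individual calls (kill -0, uname, ps --no-headers -o comm=, kill, wait) are all visible in the trace. A hedged reconstruction of that helper:

    # Sketch of a killprocess-style helper, following the traced calls.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases a sudo wrapper; simplified here.
        [[ $process_name == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

The comm= lookup is why the trace prints process_name=reactor_0: SPDK reactors rename their threads, and the helper only needs to rule out the name 'sudo'.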
00:10:58.941 ************************************ 00:10:58.941 END TEST nvme_rpc_timeouts 00:10:58.941 ************************************ 00:10:58.941 00:10:58.941 real 0m5.342s 00:10:58.941 user 0m9.858s 00:10:58.941 sys 0m0.981s 00:10:58.941 13:08:50 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.941 13:08:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:58.941 13:08:50 -- spdk/autotest.sh@239 -- # uname -s 00:10:58.941 13:08:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:58.941 13:08:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:58.941 13:08:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.941 13:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.941 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:10:59.201 ************************************ 00:10:59.201 START TEST sw_hotplug 00:10:59.201 ************************************ 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:59.201 * Looking for test storage... 00:10:59.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.201 13:08:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.201 --rc genhtml_branch_coverage=1 00:10:59.201 --rc genhtml_function_coverage=1 00:10:59.201 --rc genhtml_legend=1 00:10:59.201 --rc geninfo_all_blocks=1 00:10:59.201 --rc geninfo_unexecuted_blocks=1 00:10:59.201 00:10:59.201 ' 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.201 --rc genhtml_branch_coverage=1 00:10:59.201 --rc genhtml_function_coverage=1 00:10:59.201 --rc genhtml_legend=1 00:10:59.201 --rc geninfo_all_blocks=1 00:10:59.201 --rc geninfo_unexecuted_blocks=1 00:10:59.201 00:10:59.201 ' 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.201 --rc genhtml_branch_coverage=1 00:10:59.201 --rc genhtml_function_coverage=1 00:10:59.201 --rc genhtml_legend=1 00:10:59.201 --rc geninfo_all_blocks=1 00:10:59.201 --rc geninfo_unexecuted_blocks=1 00:10:59.201 00:10:59.201 ' 00:10:59.201 13:08:50 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.201 --rc genhtml_branch_coverage=1 00:10:59.201 --rc genhtml_function_coverage=1 00:10:59.201 --rc genhtml_legend=1 00:10:59.201 --rc geninfo_all_blocks=1 00:10:59.201 --rc geninfo_unexecuted_blocks=1 00:10:59.201 00:10:59.201 ' 00:10:59.201 13:08:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:59.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:00.030 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.030 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.030 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.030 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
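[editor's note] The lcov probe above goes through lt/cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component, left to right. A condensed sketch of the idea; the real helper also normalizes non-numeric components through its decimal function, omitted here:

    # Condensed component-wise version comparison, after the trace above.
    lt() { cmp_versions "$1" '<' "$2"; }          # e.g. lt 1.15 2 -> success
    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == "==" ]]                          # every component matched
    }

Missing components default to 0 in bash arithmetic, which is what makes 1.15 compare greater than 1 but still less than 2.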
00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:00.290 13:08:51 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:00.290 13:08:51 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:00.290 13:08:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:00.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:01.119 Waiting for block devices as requested 00:11:01.119 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.447 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.447 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.447 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.729 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:06.729 13:08:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:06.729 13:08:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.298 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:07.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.298 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:07.557 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:08.125 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.125 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69466 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:08.125 13:08:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:08.125 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:08.384 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:08.384 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:08.384 13:08:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:08.384 Initializing NVMe Controllers 00:11:08.384 Attaching to 0000:00:10.0 00:11:08.384 Attaching to 0000:00:11.0 00:11:08.384 Attached to 0000:00:11.0 00:11:08.384 Attached to 0000:00:10.0 00:11:08.384 Initialization complete. Starting I/O... 
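[editor's note] Before the hotplug app started, the nvme_in_userspace trace (sw_hotplug.sh@133 above) built the device list by filtering lspci for PCI class 01 / subclass 08 / prog-if 02 and dropping controllers still owned by the kernel nvme driver; the first nvme_count=2 survivors become the test's nvmes array. A simplified sketch; the pci_can_use allow/block-list handling and the FreeBSD branch of the real helper are omitted:

    # Simplified sketch of the traced NVMe enumeration.
    # Class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe.
    nvme_in_userspace() {
        local bdf bdfs=()
        for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
            | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue   # kernel-owned
            bdfs+=("$bdf")
        done
        printf '%s\n' "${bdfs[@]}"
    }

The awk variable cc deliberately keeps its double quotes because lspci -mm quotes the class field; the trailing tr -d '"' is the trace's own cleanup stage.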
00:11:08.384 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:08.384 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:08.384 00:11:09.762 QEMU NVMe Ctrl (12341 ): 1360 I/Os completed (+1360) 00:11:09.762 QEMU NVMe Ctrl (12340 ): 1361 I/Os completed (+1361) 00:11:09.762 00:11:10.699 QEMU NVMe Ctrl (12341 ): 3216 I/Os completed (+1856) 00:11:10.699 QEMU NVMe Ctrl (12340 ): 3217 I/Os completed (+1856) 00:11:10.699 00:11:11.637 QEMU NVMe Ctrl (12341 ): 5236 I/Os completed (+2020) 00:11:11.637 QEMU NVMe Ctrl (12340 ): 5237 I/Os completed (+2020) 00:11:11.637 00:11:12.574 QEMU NVMe Ctrl (12341 ): 7212 I/Os completed (+1976) 00:11:12.574 QEMU NVMe Ctrl (12340 ): 7214 I/Os completed (+1977) 00:11:12.575 00:11:13.511 QEMU NVMe Ctrl (12341 ): 9204 I/Os completed (+1992) 00:11:13.511 QEMU NVMe Ctrl (12340 ): 9206 I/Os completed (+1992) 00:11:13.511 00:11:14.448 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.448 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.448 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.448 [2024-12-11 13:09:05.699420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:14.448 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:14.448 [2024-12-11 13:09:05.701459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.701535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.701558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.701586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:14.448 [2024-12-11 13:09:05.704457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.704647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.704676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 [2024-12-11 13:09:05.704698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.448 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.448 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.448 [2024-12-11 13:09:05.739635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:14.448 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:14.449 [2024-12-11 13:09:05.741461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.741622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.741685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.741807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:14.449 [2024-12-11 13:09:05.747843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.748044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.748079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 [2024-12-11 13:09:05.748099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.449 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:14.449 EAL: Scan for (pci) bus failed. 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.449 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.449 13:09:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.449 Attaching to 0000:00:10.0 00:11:14.449 Attached to 0000:00:10.0 00:11:14.708 13:09:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.708 13:09:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.708 13:09:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.708 Attaching to 0000:00:11.0 00:11:14.708 Attached to 0000:00:11.0 00:11:15.646 QEMU NVMe Ctrl (12340 ): 1908 I/Os completed (+1908) 00:11:15.646 QEMU NVMe Ctrl (12341 ): 1690 I/Os completed (+1690) 00:11:15.646 00:11:16.584 QEMU NVMe Ctrl (12340 ): 3992 I/Os completed (+2084) 00:11:16.584 QEMU NVMe Ctrl (12341 ): 3774 I/Os completed (+2084) 00:11:16.584 00:11:17.523 QEMU NVMe Ctrl (12340 ): 6076 I/Os completed (+2084) 00:11:17.523 QEMU NVMe Ctrl (12341 ): 5868 I/Os completed (+2094) 00:11:17.523 00:11:18.460 QEMU NVMe Ctrl (12340 ): 8136 I/Os completed (+2060) 00:11:18.460 QEMU NVMe Ctrl (12341 ): 7931 I/Os completed (+2063) 00:11:18.460 00:11:19.398 QEMU NVMe Ctrl (12340 ): 10124 I/Os completed (+1988) 00:11:19.398 QEMU NVMe Ctrl (12341 ): 9920 I/Os completed (+1989) 00:11:19.398 00:11:20.352 QEMU NVMe Ctrl (12340 ): 12104 I/Os completed (+1980) 00:11:20.352 QEMU NVMe Ctrl (12341 ): 11900 I/Os completed (+1980) 00:11:20.352 00:11:21.731 QEMU NVMe Ctrl (12340 ): 14104 I/Os completed (+2000) 00:11:21.731 QEMU NVMe Ctrl (12341 ): 13903 I/Os completed (+2003) 
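[editor's note] One hotplug event, as just traced, is a remove/attach cycle over the nvmes array. xtrace does not print redirection targets, so the bare echo lines (sw_hotplug.sh@40, @56, @59-62) hide their sysfs destinations; the paths below are an educated reconstruction of the cycle, not a verbatim copy of sw_hotplug.sh:

    # Hedged sketch of one remove/attach cycle; the sysfs paths are assumed,
    # since xtrace hides the targets of the traced echo calls.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"           # surprise removal
    done
    # (in bdev mode the script polls the target here instead of trusting timing)
    echo 1 > /sys/bus/pci/rescan                              # re-discover devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe              # rebind userspace driver
        echo '' > "/sys/bus/pci/devices/$dev/driver_override" # clear the override
    done
    sleep $((hotplug_wait * 2))                               # 6 * 2 = the traced sleep 12

The EAL "Scan for (pci) bus failed" warnings in the trace are the expected side effect of yanking a device the app still has open.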
00:11:21.731 00:11:22.668 QEMU NVMe Ctrl (12340 ): 16120 I/Os completed (+2016) 00:11:22.668 QEMU NVMe Ctrl (12341 ): 15919 I/Os completed (+2016) 00:11:22.668 00:11:23.605 QEMU NVMe Ctrl (12340 ): 18148 I/Os completed (+2028) 00:11:23.605 QEMU NVMe Ctrl (12341 ): 17947 I/Os completed (+2028) 00:11:23.605 00:11:24.540 QEMU NVMe Ctrl (12340 ): 20160 I/Os completed (+2012) 00:11:24.540 QEMU NVMe Ctrl (12341 ): 19959 I/Os completed (+2012) 00:11:24.540 00:11:25.477 QEMU NVMe Ctrl (12340 ): 22176 I/Os completed (+2016) 00:11:25.477 QEMU NVMe Ctrl (12341 ): 21975 I/Os completed (+2016) 00:11:25.477 00:11:26.417 QEMU NVMe Ctrl (12340 ): 24188 I/Os completed (+2012) 00:11:26.417 QEMU NVMe Ctrl (12341 ): 23987 I/Os completed (+2012) 00:11:26.417 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.676 [2024-12-11 13:09:18.079148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:26.676 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:26.676 [2024-12-11 13:09:18.081110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.081313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.081371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.081507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:26.676 [2024-12-11 13:09:18.087883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.088021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.088073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.088258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.676 [2024-12-11 13:09:18.119370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:26.676 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:26.676 [2024-12-11 13:09:18.121206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.121351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.121414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.121522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:26.676 [2024-12-11 13:09:18.124423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.124551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.124606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 [2024-12-11 13:09:18.124691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:26.676 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:26.676 EAL: Scan for (pci) bus failed. 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.676 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.936 Attaching to 0000:00:10.0 00:11:26.936 Attached to 0000:00:10.0 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.936 13:09:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.936 Attaching to 0000:00:11.0 00:11:26.936 Attached to 0000:00:11.0 00:11:27.504 QEMU NVMe Ctrl (12340 ): 1128 I/Os completed (+1128) 00:11:27.504 QEMU NVMe Ctrl (12341 ): 924 I/Os completed (+924) 00:11:27.504 00:11:28.441 QEMU NVMe Ctrl (12340 ): 3108 I/Os completed (+1980) 00:11:28.441 QEMU NVMe Ctrl (12341 ): 2904 I/Os completed (+1980) 00:11:28.441 00:11:29.379 QEMU NVMe Ctrl (12340 ): 5096 I/Os completed (+1988) 00:11:29.379 QEMU NVMe Ctrl (12341 ): 4892 I/Os completed (+1988) 00:11:29.379 00:11:30.758 QEMU NVMe Ctrl (12340 ): 7092 I/Os completed (+1996) 00:11:30.758 QEMU NVMe Ctrl (12341 ): 6888 I/Os completed (+1996) 00:11:30.758 00:11:31.697 QEMU NVMe Ctrl (12340 ): 9060 I/Os completed (+1968) 00:11:31.697 QEMU NVMe Ctrl (12341 ): 8856 I/Os completed (+1968) 00:11:31.697 00:11:32.635 QEMU NVMe Ctrl (12340 ): 11012 I/Os completed (+1952) 00:11:32.635 QEMU NVMe Ctrl (12341 ): 10808 I/Os completed (+1952) 00:11:32.635 00:11:33.571 QEMU NVMe Ctrl (12340 ): 12968 I/Os completed (+1956) 00:11:33.571 QEMU NVMe Ctrl (12341 ): 12764 I/Os completed (+1956) 00:11:33.571 
00:11:34.506 QEMU NVMe Ctrl (12340 ): 14952 I/Os completed (+1984) 00:11:34.506 QEMU NVMe Ctrl (12341 ): 14748 I/Os completed (+1984) 00:11:34.506 00:11:35.441 QEMU NVMe Ctrl (12340 ): 16932 I/Os completed (+1980) 00:11:35.441 QEMU NVMe Ctrl (12341 ): 16728 I/Os completed (+1980) 00:11:35.441 00:11:36.376 QEMU NVMe Ctrl (12340 ): 18916 I/Os completed (+1984) 00:11:36.376 QEMU NVMe Ctrl (12341 ): 18712 I/Os completed (+1984) 00:11:36.376 00:11:37.349 QEMU NVMe Ctrl (12340 ): 20896 I/Os completed (+1980) 00:11:37.349 QEMU NVMe Ctrl (12341 ): 20692 I/Os completed (+1980) 00:11:37.349 00:11:38.726 QEMU NVMe Ctrl (12340 ): 22844 I/Os completed (+1948) 00:11:38.726 QEMU NVMe Ctrl (12341 ): 22640 I/Os completed (+1948) 00:11:38.726 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.984 [2024-12-11 13:09:30.442885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:38.984 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:38.984 [2024-12-11 13:09:30.445686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.445886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.446075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.446186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:38.984 [2024-12-11 13:09:30.449779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.449949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.450030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.450181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.984 [2024-12-11 13:09:30.483534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:38.984 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:38.984 [2024-12-11 13:09:30.485563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.485630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.485669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.485701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:38.984 [2024-12-11 13:09:30.488640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.488691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.488730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 [2024-12-11 13:09:30.488758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:38.984 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:38.984 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:38.984 EAL: Scan for (pci) bus failed. 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:39.242 Attaching to 0000:00:10.0 00:11:39.242 Attached to 0000:00:10.0 00:11:39.242 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.501 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.501 13:09:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.501 Attaching to 0000:00:11.0 00:11:39.501 Attached to 0000:00:11.0 00:11:39.501 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:39.501 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:39.501 [2024-12-11 13:09:30.824502] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:51.706 13:09:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:51.706 13:09:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.706 13:09:42 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.13 00:11:51.706 13:09:42 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.13 00:11:51.706 13:09:42 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:51.706 13:09:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.13 00:11:51.706 13:09:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.13 2 00:11:51.706 remove_attach_helper took 43.13s to complete (handling 2 nvme drive(s)) 13:09:42 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69466 00:11:58.308 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69466) - No such process 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69466 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=70011 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:58.308 13:09:48 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 70011 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 70011 ']' 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.308 13:09:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.308 [2024-12-11 13:09:48.963537] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
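[editor's note] The trace above switches to the target-driven half: spdk_tgt is launched in the background and waitforlisten blocks until the RPC socket answers (the "Waiting for process to start up..." line is its banner). A minimal sketch of the sequence; the polling body is paraphrased, since the real waitforlisten in autotest_common.sh retries an actual RPC rather than just testing the socket:

    # Minimal sketch of the traced tgt_run_hotplug startup.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1        # target died during startup
            [[ -S $rpc_addr ]] && return 0    # socket is up, RPC is reachable
            sleep 0.5
        done
        return 1
    }
    waitforlisten "$spdk_tgt_pid"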
00:11:58.308 [2024-12-11 13:09:48.964248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70011 ] 00:11:58.308 [2024-12-11 13:09:49.153960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.308 [2024-12-11 13:09:49.293719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:58.877 13:09:50 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:58.877 13:09:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 [2024-12-11 13:09:56.419837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
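[editor's note] debug_remove_attach_helper wraps the hotplug loop in timing_cmd, whose TIMEFORMAT=%2R setting is what produced the "remove_attach_helper took 43.13s" line earlier in the log: bash's time keyword then reports just the real time with two decimals, and the wrapper captures it. A condensed sketch; the traced helper also juggles file descriptors via exec, omitted here:

    # Condensed sketch of the TIMEFORMAT-based timing in the trace.
    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R    # %2R: real time, two decimals
        # The time keyword reports on stderr; capture that, pass stdout through.
        time=$({ time "$@" >&2; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "$nvme_count"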
00:12:05.448 [2024-12-11 13:09:56.422600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.422648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.422668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.422700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.422712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.422730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.422744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.422759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.422771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.422791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.422802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.422817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.448 [2024-12-11 13:09:56.919033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:05.448 [2024-12-11 13:09:56.921806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.921857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.921896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.921925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.921940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.921953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.921971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.921983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.921998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 [2024-12-11 13:09:56.922012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.448 [2024-12-11 13:09:56.922028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.448 [2024-12-11 13:09:56.922040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.448 13:09:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.448 13:09:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.708 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:05.968 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.968 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.968 13:09:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.194 [2024-12-11 13:10:09.398926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:18.194 [2024-12-11 13:10:09.402454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.194 [2024-12-11 13:10:09.402506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.194 [2024-12-11 13:10:09.402526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.194 [2024-12-11 13:10:09.402557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.194 [2024-12-11 13:10:09.402569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.194 [2024-12-11 13:10:09.402584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.194 [2024-12-11 13:10:09.402598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.194 [2024-12-11 13:10:09.402613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.194 [2024-12-11 13:10:09.402625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.194 [2024-12-11 13:10:09.402642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.194 [2024-12-11 13:10:09.402653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.194 [2024-12-11 13:10:09.402668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.194 13:10:09 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.194 13:10:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:18.194 13:10:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:18.453 [2024-12-11 13:10:09.898138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:18.453 [2024-12-11 13:10:09.901015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.453 [2024-12-11 13:10:09.901063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.453 [2024-12-11 13:10:09.901106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.453 [2024-12-11 13:10:09.901148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.454 [2024-12-11 13:10:09.901166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.454 [2024-12-11 13:10:09.901179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.454 [2024-12-11 13:10:09.901196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.454 [2024-12-11 13:10:09.901208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.454 [2024-12-11 13:10:09.901224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.454 [2024-12-11 13:10:09.901237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.454 [2024-12-11 13:10:09.901252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.454 [2024-12-11 13:10:09.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.454 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:12:18.454 13:10:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.454 13:10:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.713 13:10:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.713 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:18.972 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.972 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.972 13:10:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.215 13:10:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.215 13:10:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.215 13:10:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:31.215 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.216 [2024-12-11 13:10:22.477890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
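[editor's note] With use_bdev=true the helper no longer trusts fixed sleeps: after each removal it polls the target over RPC, mapping every NVMe-backed bdev to its PCI address, and spins until the removed controllers vanish; after the rescan it checks that both are back before starting the next event. The jq/sort pipeline is verbatim from the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py:

    # The traced bdev-based polling, assembled into one place.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # After removal: wait until no bdev still reports a PCI address.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

    # After rescan and rebind: wait until the expected controllers re-attach.
    until [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]; do
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

This is why the trace shows the "(( 1 > 0 ))" / sleep 0.5 pairs during removal, and the string comparison against "0000:00:10.0 0000:00:11.0" before each new hotplug event.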
00:12:31.216 [2024-12-11 13:10:22.480853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.216 [2024-12-11 13:10:22.481015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.216 [2024-12-11 13:10:22.481190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.216 [2024-12-11 13:10:22.481266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.216 [2024-12-11 13:10:22.481428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.216 [2024-12-11 13:10:22.481544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.216 [2024-12-11 13:10:22.481580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.216 [2024-12-11 13:10:22.481595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.216 [2024-12-11 13:10:22.481607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.216 [2024-12-11 13:10:22.481624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.216 [2024-12-11 13:10:22.481636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.216 [2024-12-11 13:10:22.481651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.216 13:10:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.216 13:10:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.216 13:10:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:31.216 13:10:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:31.475 [2024-12-11 13:10:22.877258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:31.475 [2024-12-11 13:10:22.879829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.475 [2024-12-11 13:10:22.879870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.475 [2024-12-11 13:10:22.879900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.475 [2024-12-11 13:10:22.879923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.475 [2024-12-11 13:10:22.879939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.475 [2024-12-11 13:10:22.879951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.475 [2024-12-11 13:10:22.879969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.475 [2024-12-11 13:10:22.879980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.475 [2024-12-11 13:10:22.880002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.475 [2024-12-11 13:10:22.880015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.475 [2024-12-11 13:10:22.880031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.475 [2024-12-11 13:10:22.880043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.475 [2024-12-11 13:10:22.880065] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:12:31.475 [2024-12-11 13:10:22.880079] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:12:31.475 [2024-12-11 13:10:22.880094] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:12:31.475 [2024-12-11 13:10:22.880105] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:12:31.475 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:31.475 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:31.475 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:31.734 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.734 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.734 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.734 13:10:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.734 13:10:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.734 13:10:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:12:31.735 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.993 13:10:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.14 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.14 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:12:44.202 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:44.202 13:10:35 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:44.202 13:10:35 sw_hotplug -- 
common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:44.202 13:10:35 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.773 13:10:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.773 13:10:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 [2024-12-11 13:10:41.595320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:50.773 [2024-12-11 13:10:41.597572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.597614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.597632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.597664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.597677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.597695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.597708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.597728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.597740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.597757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.597769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.597784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 13:10:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:50.773 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.773 [2024-12-11 13:10:41.994673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:50.773 [2024-12-11 13:10:41.996483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.996642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.996673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.996699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.996715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.996728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.996749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.996760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.996776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 [2024-12-11 13:10:41.996790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.773 [2024-12-11 13:10:41.996809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.773 [2024-12-11 13:10:41.996821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.773 13:10:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.773 13:10:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 13:10:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.773 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 
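The re-attach half (@56-@62) echoes the driver name and each BDF back into sysfs. xtrace shows only the echoed values, never their destinations, so every redirect target below is an assumption; the BDF is echoed twice in the trace (@60 and @61), read here as a probe followed by an explicit bind:

    hotplug_wait=6
    echo 1 > /sys/bus/pci/rescan                                            # assumed target of @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59, target assumed
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60, target assumed
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @61, target assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62, clear the override
    done
    sleep $((2 * hotplug_wait))   # @66 in the trace: "sleep 12"

After the sleep, @70-@71 re-run bdev_bdfs and require the sorted result to equal the expected "0000:00:10.0 0000:00:11.0" pair before the next hotplug_events iteration.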
00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.032 13:10:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.263 [2024-12-11 13:10:54.674271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:03.263 [2024-12-11 13:10:54.676682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.263 [2024-12-11 13:10:54.676724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.263 [2024-12-11 13:10:54.676743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.263 [2024-12-11 13:10:54.676772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.263 [2024-12-11 13:10:54.676786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.263 [2024-12-11 13:10:54.676802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.263 [2024-12-11 13:10:54.676816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.263 [2024-12-11 13:10:54.676830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.263 [2024-12-11 13:10:54.676842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.263 [2024-12-11 13:10:54.676859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.263 [2024-12-11 13:10:54.676871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.263 [2024-12-11 13:10:54.676886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.263 13:10:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:03.263 13:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:03.522 [2024-12-11 13:10:55.073627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:03.522 [2024-12-11 13:10:55.076119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.522 [2024-12-11 13:10:55.076194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.522 [2024-12-11 13:10:55.076217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.522 [2024-12-11 13:10:55.076241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.522 [2024-12-11 13:10:55.076256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.522 [2024-12-11 13:10:55.076269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.522 [2024-12-11 13:10:55.076296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.522 [2024-12-11 13:10:55.076308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.522 [2024-12-11 13:10:55.076323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.522 [2024-12-11 13:10:55.076337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.522 [2024-12-11 13:10:55.076351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.522 [2024-12-11 13:10:55.076363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.781 13:10:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.781 13:10:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.781 13:10:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.781 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.039 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.040 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.040 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:04.040 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:04.040 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.040 13:10:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.255 [2024-12-11 13:11:07.653382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:16.255 [2024-12-11 13:11:07.655518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.255 [2024-12-11 13:11:07.655674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.255 [2024-12-11 13:11:07.655830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.255 [2024-12-11 13:11:07.655965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.255 [2024-12-11 13:11:07.656009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.255 [2024-12-11 13:11:07.656131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.255 [2024-12-11 13:11:07.656308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.255 [2024-12-11 13:11:07.656331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.255 [2024-12-11 13:11:07.656344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.255 [2024-12-11 13:11:07.656362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.255 [2024-12-11 13:11:07.656374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.255 [2024-12-11 13:11:07.656390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.255 13:11:07 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.255 13:11:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:16.255 13:11:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:16.514 [2024-12-11 13:11:08.052724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:16.514 [2024-12-11 13:11:08.055175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.514 [2024-12-11 13:11:08.055225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.514 [2024-12-11 13:11:08.055262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.514 [2024-12-11 13:11:08.055283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.514 [2024-12-11 13:11:08.055298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.514 [2024-12-11 13:11:08.055310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.514 [2024-12-11 13:11:08.055332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.514 [2024-12-11 13:11:08.055344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.514 [2024-12-11 13:11:08.055362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.514 [2024-12-11 13:11:08.055376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.514 [2024-12-11 13:11:08.055390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.514 [2024-12-11 13:11:08.055402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:16.773 13:11:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.773 13:11:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.773 13:11:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:16.773 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:17.032 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:17.292 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.292 13:11:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:29.500 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:29.500 13:11:20 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 70011 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 70011 ']' 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 70011 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70011 00:13:29.500 killing process with pid 70011 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70011' 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@973 -- # kill 70011 00:13:29.500 13:11:20 sw_hotplug -- common/autotest_common.sh@978 -- # wait 70011 00:13:32.036 13:11:23 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:33.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:33.172 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.172 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.172 00:13:33.172 real 2m34.200s 00:13:33.172 user 1m53.327s 00:13:33.172 sys 0m21.165s 00:13:33.172 ************************************ 00:13:33.172 END TEST sw_hotplug 00:13:33.172 ************************************ 00:13:33.172 13:11:24 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.172 13:11:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 13:11:24 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:33.431 13:11:24 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:33.431 13:11:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.431 13:11:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.431 13:11:24 -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 ************************************ 00:13:33.431 START TEST nvme_xnvme 00:13:33.431 ************************************ 00:13:33.431 13:11:24 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:33.431 * Looking for test storage... 
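The teardown traced just above goes through killprocess (common/autotest_common.sh@954-@978): resolve the process name, refuse to signal a sudo wrapper, then kill and reap the pid. A reconstruction consistent with the trace, simplified to the Linux path it exercises:

    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1                             # @954: a pid must be given
        kill -0 "$pid" || return                              # @958: process must still be alive
        if [[ $(uname) == Linux ]]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
        fi
        [[ $process_name == sudo ]] && return 1               # @964: never signal sudo itself
        echo "killing process with pid $pid"                  # @972
        kill "$pid"                                           # @973
        wait "$pid"                                           # @978
    }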
00:13:33.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.431 13:11:24 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.431 13:11:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.431 13:11:24 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.700 13:11:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.700 --rc genhtml_branch_coverage=1 00:13:33.700 --rc genhtml_function_coverage=1 00:13:33.700 --rc genhtml_legend=1 00:13:33.700 --rc geninfo_all_blocks=1 00:13:33.700 --rc geninfo_unexecuted_blocks=1 00:13:33.700 00:13:33.700 ' 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.700 --rc genhtml_branch_coverage=1 00:13:33.700 --rc genhtml_function_coverage=1 00:13:33.700 --rc genhtml_legend=1 00:13:33.700 --rc geninfo_all_blocks=1 00:13:33.700 --rc geninfo_unexecuted_blocks=1 00:13:33.700 00:13:33.700 ' 00:13:33.700 13:11:25 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.700 --rc genhtml_branch_coverage=1 00:13:33.700 --rc genhtml_function_coverage=1 00:13:33.700 --rc genhtml_legend=1 00:13:33.700 --rc geninfo_all_blocks=1 00:13:33.700 --rc geninfo_unexecuted_blocks=1 00:13:33.700 00:13:33.700 ' 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.700 --rc genhtml_branch_coverage=1 00:13:33.700 --rc genhtml_function_coverage=1 00:13:33.700 --rc genhtml_legend=1 00:13:33.700 --rc geninfo_all_blocks=1 00:13:33.700 --rc geninfo_unexecuted_blocks=1 00:13:33.700 00:13:33.700 ' 00:13:33.700 13:11:25 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:33.700 13:11:25 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:33.700 13:11:25 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:33.700 13:11:25 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
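The lcov check traced at the start of this nvme_xnvme section ("lt 1.15 2" via scripts/common.sh@333-@373 cmp_versions) splits both version strings on '.', '-' and ':' and compares components numerically, left to right. A minimal sketch of that walk, trimmed to numeric components and the strict '<' operator the trace exercises (the real helper also handles '>', '==' and friends through its op case statement):

    lt() {
        local ver1 ver2 v max
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions: not strictly less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # succeeds, matching the trace's "return 0"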
00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:33.700 13:11:25 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:33.700 #define SPDK_CONFIG_H 00:13:33.700 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:33.700 #define SPDK_CONFIG_APPS 1 00:13:33.700 #define SPDK_CONFIG_ARCH native 00:13:33.700 #define SPDK_CONFIG_ASAN 1 00:13:33.700 #undef SPDK_CONFIG_AVAHI 00:13:33.700 #undef SPDK_CONFIG_CET 00:13:33.700 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:33.700 #define SPDK_CONFIG_COVERAGE 1 00:13:33.700 #define SPDK_CONFIG_CROSS_PREFIX 00:13:33.700 #undef SPDK_CONFIG_CRYPTO 00:13:33.700 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:33.700 #undef SPDK_CONFIG_CUSTOMOCF 00:13:33.700 #undef SPDK_CONFIG_DAOS 00:13:33.700 #define SPDK_CONFIG_DAOS_DIR 00:13:33.700 #define SPDK_CONFIG_DEBUG 1 00:13:33.700 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:33.700 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:33.700 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:33.700 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:33.700 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:33.700 #undef SPDK_CONFIG_DPDK_UADK 00:13:33.700 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:33.700 #define SPDK_CONFIG_EXAMPLES 1 00:13:33.700 #undef SPDK_CONFIG_FC 00:13:33.700 #define SPDK_CONFIG_FC_PATH 00:13:33.700 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:33.701 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:33.701 #define SPDK_CONFIG_FSDEV 1 00:13:33.701 #undef SPDK_CONFIG_FUSE 00:13:33.701 #undef SPDK_CONFIG_FUZZER 00:13:33.701 #define SPDK_CONFIG_FUZZER_LIB 00:13:33.701 #undef SPDK_CONFIG_GOLANG 00:13:33.701 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:33.701 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:33.701 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:33.701 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:33.701 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:33.701 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:33.701 #undef SPDK_CONFIG_HAVE_LZ4 00:13:33.701 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:33.701 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:33.701 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:33.701 #define SPDK_CONFIG_IDXD 1 00:13:33.701 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:33.701 #undef SPDK_CONFIG_IPSEC_MB 00:13:33.701 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:33.701 #define SPDK_CONFIG_ISAL 1 00:13:33.701 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:33.701 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:33.701 #define SPDK_CONFIG_LIBDIR 00:13:33.701 #undef SPDK_CONFIG_LTO 00:13:33.701 #define SPDK_CONFIG_MAX_LCORES 128 00:13:33.701 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:33.701 #define SPDK_CONFIG_NVME_CUSE 1 00:13:33.701 #undef SPDK_CONFIG_OCF 00:13:33.701 #define SPDK_CONFIG_OCF_PATH 00:13:33.701 #define SPDK_CONFIG_OPENSSL_PATH 00:13:33.701 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:33.701 
#define SPDK_CONFIG_PGO_DIR 00:13:33.701 #undef SPDK_CONFIG_PGO_USE 00:13:33.701 #define SPDK_CONFIG_PREFIX /usr/local 00:13:33.701 #undef SPDK_CONFIG_RAID5F 00:13:33.701 #undef SPDK_CONFIG_RBD 00:13:33.701 #define SPDK_CONFIG_RDMA 1 00:13:33.701 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:33.701 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:33.701 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:33.701 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:33.701 #define SPDK_CONFIG_SHARED 1 00:13:33.701 #undef SPDK_CONFIG_SMA 00:13:33.701 #define SPDK_CONFIG_TESTS 1 00:13:33.701 #undef SPDK_CONFIG_TSAN 00:13:33.701 #define SPDK_CONFIG_UBLK 1 00:13:33.701 #define SPDK_CONFIG_UBSAN 1 00:13:33.701 #undef SPDK_CONFIG_UNIT_TESTS 00:13:33.701 #undef SPDK_CONFIG_URING 00:13:33.701 #define SPDK_CONFIG_URING_PATH 00:13:33.701 #undef SPDK_CONFIG_URING_ZNS 00:13:33.701 #undef SPDK_CONFIG_USDT 00:13:33.701 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:33.701 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:33.701 #undef SPDK_CONFIG_VFIO_USER 00:13:33.701 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:33.701 #define SPDK_CONFIG_VHOST 1 00:13:33.701 #define SPDK_CONFIG_VIRTIO 1 00:13:33.701 #undef SPDK_CONFIG_VTUNE 00:13:33.701 #define SPDK_CONFIG_VTUNE_DIR 00:13:33.701 #define SPDK_CONFIG_WERROR 1 00:13:33.701 #define SPDK_CONFIG_WPDK_DIR 00:13:33.701 #define SPDK_CONFIG_XNVME 1 00:13:33.701 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:33.701 13:11:25 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.701 13:11:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.701 13:11:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.701 13:11:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.701 13:11:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.701 13:11:25 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.701 13:11:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.701 13:11:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.701 13:11:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:33.701 13:11:25 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:33.701 13:11:25 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:33.701 13:11:25 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:33.701 
13:11:25 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:33.701 13:11:25 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:33.702 
13:11:25 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
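The trace above pins down the sanitizer runtime before any test binary starts. Condensed into a standalone sketch (the option strings and suppression path are copied verbatim from the trace; the harness builds the file with cat plus echo, collapsed here into a single redirect):

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # Rebuild the LeakSanitizer suppression file on every run, then point LSAN at
  # it; libfuse3 is the one known-noisy leak source suppressed here.
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo 'leak:libfuse3.so' > "$asan_suppression_file"
  export LSAN_OPTIONS=suppressions=$asan_suppression_file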
00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 71358 ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 71358 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.bV7q3n 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.bV7q3n/tests/xnvme /tmp/spdk.bV7q3n 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971423232 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596233728 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:33.702 
13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971423232 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596233728 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:33.702 13:11:25 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96759107584 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2943672320 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:33.702 * Looking for test storage... 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971423232 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:33.702 13:11:25 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.702 13:11:25 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.964 --rc genhtml_branch_coverage=1 00:13:33.964 --rc genhtml_function_coverage=1 00:13:33.964 --rc genhtml_legend=1 00:13:33.964 --rc geninfo_all_blocks=1 00:13:33.964 --rc geninfo_unexecuted_blocks=1 00:13:33.964 00:13:33.964 ' 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.964 --rc genhtml_branch_coverage=1 00:13:33.964 --rc genhtml_function_coverage=1 00:13:33.964 --rc genhtml_legend=1 00:13:33.964 --rc geninfo_all_blocks=1 00:13:33.964 --rc geninfo_unexecuted_blocks=1 00:13:33.964 00:13:33.964 ' 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.964 --rc genhtml_branch_coverage=1 00:13:33.964 --rc genhtml_function_coverage=1 00:13:33.964 --rc genhtml_legend=1 00:13:33.964 --rc geninfo_all_blocks=1 00:13:33.964 --rc geninfo_unexecuted_blocks=1 00:13:33.964 00:13:33.964 ' 00:13:33.964 13:11:25 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.964 --rc genhtml_branch_coverage=1 00:13:33.964 --rc genhtml_function_coverage=1 00:13:33.964 --rc genhtml_legend=1 00:13:33.964 --rc geninfo_all_blocks=1 00:13:33.964 --rc geninfo_unexecuted_blocks=1 00:13:33.964 00:13:33.964 ' 00:13:33.964 13:11:25 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.964 13:11:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.964 13:11:25 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.964 13:11:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.965 13:11:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.965 13:11:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:33.965 13:11:25 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:33.965 
13:11:25 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:33.965 13:11:25 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:34.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.791 Waiting for block devices as requested 00:13:34.791 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.050 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.050 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.050 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.326 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:40.326 13:11:31 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:40.585 13:11:32 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:40.585 13:11:32 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:40.844 13:11:32 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:40.844 13:11:32 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:40.844 13:11:32 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:40.844 13:11:32 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:40.844 13:11:32 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:40.844 No valid GPT data, bailing 00:13:40.845 13:11:32 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:40.845 13:11:32 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:13:40.845 13:11:32 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:40.845 13:11:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:40.845 13:11:32 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.845 13:11:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.845 13:11:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.104 ************************************ 00:13:41.104 START TEST xnvme_rpc 00:13:41.104 ************************************ 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71759 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71759 00:13:41.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71759 ']' 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.104 13:11:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.104 [2024-12-11 13:11:32.516239] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
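Once spdk_tgt (pid 71759 in this run) is listening, the test below is a create/inspect/delete round-trip over the RPC socket. A sketch of that flow in plain shell, assuming rpc_cmd is equivalent to calling scripts/rpc.py against /var/tmp/spdk.sock (the wrapper's definition is not shown in the trace; argument order and the jq filter are copied from it):

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" & spdk_tgt=$!
  rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
  # ... after waitforlisten sees /var/tmp/spdk.sock accept connections ...
  rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
  rpc framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  # expected: /dev/nvme0n1; .name, .io_mechanism and .conserve_cpu are checked
  # the same way against xnvme_bdev, libaio and false
  rpc bdev_xnvme_delete xnvme_bdev
  kill "$spdk_tgt"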
00:13:41.104 [2024-12-11 13:11:32.516384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:13:41.364 [2024-12-11 13:11:32.703908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.364 [2024-12-11 13:11:32.842323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.302 xnvme_bdev 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.302 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.561 13:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.561 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71759 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71759 ']' 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71759 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71759 00:13:42.562 killing process with pid 71759 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71759' 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71759 00:13:42.562 13:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71759 00:13:45.853 ************************************ 00:13:45.853 END TEST xnvme_rpc 00:13:45.853 00:13:45.853 real 0m4.256s 00:13:45.853 user 0m4.126s 00:13:45.853 sys 0m0.717s 00:13:45.853 13:11:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.853 13:11:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 ************************************ 00:13:45.853 13:11:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:45.853 13:11:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.853 13:11:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.853 13:11:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 ************************************ 00:13:45.853 START TEST xnvme_bdevperf 00:13:45.853 ************************************ 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.853 13:11:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 { 00:13:45.853 "subsystems": [ 00:13:45.853 { 00:13:45.853 "subsystem": "bdev", 00:13:45.853 "config": [ 00:13:45.853 { 00:13:45.853 "params": { 00:13:45.853 "io_mechanism": "libaio", 00:13:45.853 "conserve_cpu": false, 00:13:45.853 "filename": "/dev/nvme0n1", 00:13:45.853 "name": "xnvme_bdev" 00:13:45.853 }, 00:13:45.853 "method": "bdev_xnvme_create" 00:13:45.853 }, 00:13:45.853 { 00:13:45.853 "method": "bdev_wait_for_examine" 00:13:45.853 } 00:13:45.853 ] 00:13:45.853 } 00:13:45.853 ] 00:13:45.853 } 00:13:45.853 [2024-12-11 13:11:36.851784] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:13:45.853 [2024-12-11 13:11:36.851929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71844 ] 00:13:45.853 [2024-12-11 13:11:37.034817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.853 [2024-12-11 13:11:37.167242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.125 Running I/O for 5 seconds... 00:13:48.440 44266.00 IOPS, 172.91 MiB/s [2024-12-11T13:11:40.950Z] 44426.50 IOPS, 173.54 MiB/s [2024-12-11T13:11:41.888Z] 44015.33 IOPS, 171.93 MiB/s [2024-12-11T13:11:42.825Z] 43793.00 IOPS, 171.07 MiB/s [2024-12-11T13:11:42.825Z] 44141.20 IOPS, 172.43 MiB/s 00:13:51.257 Latency(us) 00:13:51.257 [2024-12-11T13:11:42.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.257 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:51.257 xnvme_bdev : 5.00 44120.12 172.34 0.00 0.00 1447.37 394.80 12844.00 00:13:51.257 [2024-12-11T13:11:42.825Z] =================================================================================================================== 00:13:51.257 [2024-12-11T13:11:42.825Z] Total : 44120.12 172.34 0.00 0.00 1447.37 394.80 12844.00 00:13:52.636 13:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:52.636 13:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:52.636 13:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:52.636 13:11:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:52.636 13:11:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:52.636 { 00:13:52.636 "subsystems": [ 00:13:52.636 { 00:13:52.636 "subsystem": "bdev", 00:13:52.636 "config": [ 00:13:52.636 { 00:13:52.636 "params": { 00:13:52.636 "io_mechanism": "libaio", 00:13:52.636 "conserve_cpu": false, 00:13:52.636 "filename": "/dev/nvme0n1", 00:13:52.636 "name": "xnvme_bdev" 00:13:52.636 }, 00:13:52.636 "method": "bdev_xnvme_create" 00:13:52.636 }, 00:13:52.636 { 00:13:52.636 "method": "bdev_wait_for_examine" 00:13:52.636 } 00:13:52.636 ] 00:13:52.636 } 00:13:52.636 ] 00:13:52.636 } 00:13:52.636 [2024-12-11 13:11:43.936030] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:13:52.636 [2024-12-11 13:11:43.936359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71925 ] 00:13:52.636 [2024-12-11 13:11:44.121301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.895 [2024-12-11 13:11:44.266395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.154 Running I/O for 5 seconds... 00:13:55.474 48421.00 IOPS, 189.14 MiB/s [2024-12-11T13:11:47.980Z] 45600.50 IOPS, 178.13 MiB/s [2024-12-11T13:11:48.917Z] 46096.33 IOPS, 180.06 MiB/s [2024-12-11T13:11:49.871Z] 44594.75 IOPS, 174.20 MiB/s 00:13:58.303 Latency(us) 00:13:58.303 [2024-12-11T13:11:49.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.303 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:58.303 xnvme_bdev : 5.00 43119.16 168.43 0.00 0.00 1480.71 172.72 3342.60 00:13:58.303 [2024-12-11T13:11:49.871Z] =================================================================================================================== 00:13:58.303 [2024-12-11T13:11:49.871Z] Total : 43119.16 168.43 0.00 0.00 1480.71 172.72 3342.60 00:13:59.682 00:13:59.682 real 0m14.189s 00:13:59.682 user 0m5.237s 00:13:59.682 sys 0m6.104s 00:13:59.682 13:11:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.682 13:11:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.682 ************************************ 00:13:59.682 END TEST xnvme_bdevperf 00:13:59.682 ************************************ 00:13:59.682 13:11:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:59.682 13:11:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.682 13:11:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.682 13:11:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.682 ************************************ 00:13:59.682 START TEST xnvme_fio_plugin 00:13:59.682 ************************************ 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:59.682 { 00:13:59.682 "subsystems": [ 00:13:59.682 { 00:13:59.682 "subsystem": "bdev", 00:13:59.682 "config": [ 00:13:59.682 { 00:13:59.682 "params": { 00:13:59.682 "io_mechanism": "libaio", 00:13:59.682 "conserve_cpu": false, 00:13:59.682 "filename": "/dev/nvme0n1", 00:13:59.682 "name": "xnvme_bdev" 00:13:59.682 }, 00:13:59.682 "method": "bdev_xnvme_create" 00:13:59.682 }, 00:13:59.682 { 00:13:59.682 "method": "bdev_wait_for_examine" 00:13:59.682 } 00:13:59.682 ] 00:13:59.682 } 00:13:59.682 ] 00:13:59.682 } 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:59.682 13:11:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.941 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:59.941 fio-3.35 00:13:59.941 Starting 1 thread 00:14:06.511 00:14:06.511 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72051: Wed Dec 11 13:11:57 2024 00:14:06.511 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(861MiB/5001msec) 00:14:06.511 slat (usec): min=4, max=1010, avg=20.17, stdev=19.57 00:14:06.511 clat (usec): min=81, max=8161, avg=830.68, stdev=497.55 00:14:06.511 lat (usec): min=122, max=8179, avg=850.85, stdev=499.90 00:14:06.511 clat percentiles (usec): 00:14:06.511 | 1.00th=[ 153], 5.00th=[ 223], 10.00th=[ 293], 20.00th=[ 416], 00:14:06.511 | 30.00th=[ 537], 40.00th=[ 652], 50.00th=[ 775], 60.00th=[ 889], 00:14:06.511 | 70.00th=[ 1020], 80.00th=[ 1156], 90.00th=[ 1352], 95.00th=[ 1565], 00:14:06.511 | 99.00th=[ 2704], 99.50th=[ 3261], 99.90th=[ 4228], 99.95th=[ 4424], 00:14:06.511 | 99.99th=[ 5014] 00:14:06.511 bw ( KiB/s): min=162208, max=196624, per=100.00%, avg=177473.89, stdev=11470.09, 
samples=9 00:14:06.511 iops : min=40552, max=49156, avg=44368.44, stdev=2867.56, samples=9 00:14:06.511 lat (usec) : 100=0.07%, 250=6.90%, 500=20.11%, 750=21.23%, 1000=20.23% 00:14:06.511 lat (msec) : 2=29.02%, 4=2.28%, 10=0.16% 00:14:06.511 cpu : usr=21.96%, sys=53.98%, ctx=137, majf=0, minf=764 00:14:06.511 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=12.0%, 16=26.3%, 32=54.0%, >=64=1.7% 00:14:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.511 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:06.511 issued rwts: total=220497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:06.511 00:14:06.511 Run status group 0 (all jobs): 00:14:06.511 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=861MiB (903MB), run=5001-5001msec 00:14:07.080 ----------------------------------------------------- 00:14:07.080 Suppressions used: 00:14:07.080 count bytes template 00:14:07.080 1 11 /usr/src/fio/parse.c 00:14:07.080 1 8 libtcmalloc_minimal.so 00:14:07.080 1 904 libcrypto.so 00:14:07.080 ----------------------------------------------------- 00:14:07.080 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.080 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:07.339 13:11:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.339 { 00:14:07.339 "subsystems": [ 00:14:07.339 { 00:14:07.339 "subsystem": "bdev", 00:14:07.339 "config": [ 00:14:07.339 { 00:14:07.339 "params": { 00:14:07.339 "io_mechanism": "libaio", 00:14:07.339 "conserve_cpu": false, 00:14:07.339 "filename": "/dev/nvme0n1", 00:14:07.339 "name": "xnvme_bdev" 00:14:07.339 }, 00:14:07.339 "method": "bdev_xnvme_create" 00:14:07.339 }, 00:14:07.339 { 00:14:07.339 "method": "bdev_wait_for_examine" 00:14:07.339 } 00:14:07.339 ] 00:14:07.339 } 00:14:07.339 ] 00:14:07.339 } 00:14:07.340 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:07.340 fio-3.35 00:14:07.340 Starting 1 thread 00:14:14.001 00:14:14.001 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72154: Wed Dec 11 13:12:04 2024 00:14:14.001 write: IOPS=45.6k, BW=178MiB/s (187MB/s)(891MiB/5001msec); 0 zone resets 00:14:14.001 slat (usec): min=4, max=848, avg=19.13, stdev=22.89 00:14:14.001 clat (usec): min=92, max=5879, avg=831.07, stdev=510.77 00:14:14.001 lat (usec): min=139, max=5908, avg=850.20, stdev=514.52 00:14:14.001 clat percentiles (usec): 00:14:14.001 | 1.00th=[ 182], 5.00th=[ 255], 10.00th=[ 322], 20.00th=[ 445], 00:14:14.001 | 30.00th=[ 553], 40.00th=[ 660], 50.00th=[ 758], 60.00th=[ 865], 00:14:14.001 | 70.00th=[ 979], 80.00th=[ 1106], 90.00th=[ 1303], 95.00th=[ 1565], 00:14:14.001 | 99.00th=[ 3032], 99.50th=[ 3556], 99.90th=[ 4293], 99.95th=[ 4555], 00:14:14.001 | 99.99th=[ 4948] 00:14:14.001 bw ( KiB/s): min=176416, max=190168, per=99.89%, avg=182291.44, stdev=4328.33, samples=9 00:14:14.001 iops : min=44104, max=47542, avg=45572.78, stdev=1081.98, samples=9 00:14:14.001 lat (usec) : 100=0.02%, 250=4.60%, 500=20.59%, 750=23.66%, 1000=23.24% 00:14:14.001 lat (msec) : 2=24.82%, 4=2.83%, 10=0.23% 00:14:14.001 cpu : usr=26.28%, sys=51.70%, ctx=97, majf=0, minf=765 00:14:14.001 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=10.9%, 16=26.1%, 32=56.2%, >=64=1.8% 00:14:14.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.001 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:14.001 issued rwts: total=0,228167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.001 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.001 00:14:14.001 Run status group 0 (all jobs): 00:14:14.001 WRITE: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=891MiB (935MB), run=5001-5001msec 00:14:14.937 ----------------------------------------------------- 00:14:14.937 Suppressions used: 00:14:14.937 count bytes template 00:14:14.937 1 11 /usr/src/fio/parse.c 00:14:14.937 1 8 libtcmalloc_minimal.so 00:14:14.937 1 904 libcrypto.so 00:14:14.937 ----------------------------------------------------- 00:14:14.937 00:14:14.937 00:14:14.937 real 0m15.244s 00:14:14.937 user 0m6.381s 00:14:14.937 sys 0m6.230s 00:14:14.937 13:12:06 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.937 13:12:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.937 ************************************ 00:14:14.937 END TEST xnvme_fio_plugin 00:14:14.937 ************************************ 00:14:14.937 13:12:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:14.937 13:12:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:14.937 13:12:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:14.937 13:12:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:14.937 13:12:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.937 13:12:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.937 13:12:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.937 ************************************ 00:14:14.937 START TEST xnvme_rpc 00:14:14.937 ************************************ 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72240 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72240 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72240 ']' 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.937 13:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.937 [2024-12-11 13:12:06.442068] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:14:14.937 [2024-12-11 13:12:06.442258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ] 00:14:15.196 [2024-12-11 13:12:06.626562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.456 [2024-12-11 13:12:06.768352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.395 xnvme_bdev 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.395 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72240 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72240 ']' 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72240 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:16.655 13:12:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72240 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.655 killing process with pid 72240 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72240' 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72240 00:14:16.655 13:12:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72240 00:14:19.192 00:14:19.192 real 0m4.287s 00:14:19.192 user 0m4.159s 00:14:19.192 sys 0m0.735s 00:14:19.192 13:12:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.192 13:12:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.192 ************************************ 00:14:19.192 END TEST xnvme_rpc 00:14:19.192 ************************************ 00:14:19.192 13:12:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:19.192 13:12:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.192 13:12:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.192 13:12:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.192 ************************************ 00:14:19.192 START TEST xnvme_bdevperf 00:14:19.192 ************************************ 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.192 13:12:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.192 { 00:14:19.192 "subsystems": [ 00:14:19.192 { 00:14:19.192 "subsystem": "bdev", 00:14:19.192 "config": [ 00:14:19.192 { 00:14:19.192 "params": { 00:14:19.192 "io_mechanism": "libaio", 00:14:19.192 "conserve_cpu": true, 00:14:19.192 "filename": "/dev/nvme0n1", 00:14:19.192 "name": "xnvme_bdev" 00:14:19.192 }, 00:14:19.192 "method": "bdev_xnvme_create" 00:14:19.192 }, 00:14:19.192 { 00:14:19.192 "method": "bdev_wait_for_examine" 00:14:19.192 } 00:14:19.192 ] 00:14:19.192 } 00:14:19.192 ] 00:14:19.192 } 00:14:19.452 [2024-12-11 13:12:10.790497] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:14:19.452 [2024-12-11 13:12:10.790643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72326 ] 00:14:19.452 [2024-12-11 13:12:10.975229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.711 [2024-12-11 13:12:11.113333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.279 Running I/O for 5 seconds... 00:14:22.147 41072.00 IOPS, 160.44 MiB/s [2024-12-11T13:12:14.650Z] 40838.50 IOPS, 159.53 MiB/s [2024-12-11T13:12:15.585Z] 40794.33 IOPS, 159.35 MiB/s [2024-12-11T13:12:16.958Z] 40824.25 IOPS, 159.47 MiB/s 00:14:25.390 Latency(us) 00:14:25.390 [2024-12-11T13:12:16.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.390 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:25.390 xnvme_bdev : 5.00 40986.10 160.10 0.00 0.00 1558.06 335.58 9211.89 00:14:25.390 [2024-12-11T13:12:16.958Z] =================================================================================================================== 00:14:25.390 [2024-12-11T13:12:16.958Z] Total : 40986.10 160.10 0.00 0.00 1558.06 335.58 9211.89 00:14:26.327 13:12:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.327 13:12:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:26.327 13:12:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.327 13:12:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:26.327 13:12:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 { 00:14:26.327 "subsystems": [ 00:14:26.327 { 00:14:26.327 "subsystem": "bdev", 00:14:26.327 "config": [ 00:14:26.327 { 00:14:26.327 "params": { 00:14:26.327 "io_mechanism": "libaio", 00:14:26.327 "conserve_cpu": true, 00:14:26.327 "filename": "/dev/nvme0n1", 00:14:26.327 "name": "xnvme_bdev" 00:14:26.327 }, 00:14:26.327 "method": "bdev_xnvme_create" 00:14:26.327 }, 00:14:26.327 { 00:14:26.327 "method": "bdev_wait_for_examine" 00:14:26.327 } 00:14:26.327 ] 00:14:26.327 } 00:14:26.327 ] 00:14:26.327 } 00:14:26.327 [2024-12-11 13:12:17.854041] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:14:26.327 [2024-12-11 13:12:17.854218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72407 ] 00:14:26.586 [2024-12-11 13:12:18.021763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.844 [2024-12-11 13:12:18.156027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.103 Running I/O for 5 seconds... 00:14:29.414 42561.00 IOPS, 166.25 MiB/s [2024-12-11T13:12:21.915Z] 42128.00 IOPS, 164.56 MiB/s [2024-12-11T13:12:22.850Z] 42092.33 IOPS, 164.42 MiB/s [2024-12-11T13:12:23.784Z] 41618.25 IOPS, 162.57 MiB/s [2024-12-11T13:12:23.784Z] 41573.40 IOPS, 162.40 MiB/s 00:14:32.216 Latency(us) 00:14:32.216 [2024-12-11T13:12:23.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.216 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:32.216 xnvme_bdev : 5.00 41550.17 162.31 0.00 0.00 1536.71 292.81 4948.10 00:14:32.216 [2024-12-11T13:12:23.784Z] =================================================================================================================== 00:14:32.216 [2024-12-11T13:12:23.784Z] Total : 41550.17 162.31 0.00 0.00 1536.71 292.81 4948.10 00:14:33.611 00:14:33.611 real 0m14.143s 00:14:33.611 user 0m5.175s 00:14:33.611 sys 0m6.065s 00:14:33.611 13:12:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.611 ************************************ 00:14:33.611 END TEST xnvme_bdevperf 00:14:33.611 ************************************ 00:14:33.611 13:12:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.611 13:12:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:33.611 13:12:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:33.611 13:12:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.611 13:12:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.611 ************************************ 00:14:33.611 START TEST xnvme_fio_plugin 00:14:33.611 ************************************ 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:33.611 
13:12:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:33.611 { 00:14:33.611 "subsystems": [ 00:14:33.611 { 00:14:33.611 "subsystem": "bdev", 00:14:33.611 "config": [ 00:14:33.611 { 00:14:33.611 "params": { 00:14:33.611 "io_mechanism": "libaio", 00:14:33.611 "conserve_cpu": true, 00:14:33.611 "filename": "/dev/nvme0n1", 00:14:33.611 "name": "xnvme_bdev" 00:14:33.611 }, 00:14:33.611 "method": "bdev_xnvme_create" 00:14:33.611 }, 00:14:33.611 { 00:14:33.611 "method": "bdev_wait_for_examine" 00:14:33.611 } 00:14:33.611 ] 00:14:33.611 } 00:14:33.611 ] 00:14:33.611 } 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.611 13:12:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.611 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:33.611 fio-3.35 00:14:33.611 Starting 1 thread 00:14:40.173 00:14:40.173 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72536: Wed Dec 11 13:12:31 2024 00:14:40.173 read: IOPS=42.6k, BW=167MiB/s (175MB/s)(833MiB/5001msec) 00:14:40.173 slat (usec): min=4, max=1453, avg=20.62, stdev=23.79 00:14:40.173 clat (usec): min=59, max=6384, avg=879.35, stdev=545.54 00:14:40.173 lat (usec): min=101, max=6475, avg=899.97, stdev=549.33 00:14:40.173 clat percentiles (usec): 00:14:40.173 | 1.00th=[ 174], 5.00th=[ 251], 10.00th=[ 326], 20.00th=[ 453], 00:14:40.173 | 30.00th=[ 578], 40.00th=[ 693], 50.00th=[ 807], 60.00th=[ 922], 00:14:40.173 | 70.00th=[ 1045], 80.00th=[ 1188], 90.00th=[ 1401], 95.00th=[ 1663], 00:14:40.173 | 99.00th=[ 3195], 99.50th=[ 3752], 99.90th=[ 4424], 99.95th=[ 4752], 00:14:40.173 | 99.99th=[ 5080] 00:14:40.173 bw ( KiB/s): min=161856, 
max=179376, per=100.00%, avg=171151.11, stdev=5856.61, samples=9 00:14:40.173 iops : min=40464, max=44844, avg=42787.78, stdev=1464.15, samples=9 00:14:40.173 lat (usec) : 100=0.03%, 250=4.97%, 500=18.86%, 750=21.30%, 1000=21.36% 00:14:40.173 lat (msec) : 2=30.17%, 4=2.97%, 10=0.34% 00:14:40.173 cpu : usr=25.00%, sys=51.72%, ctx=118, majf=0, minf=764 00:14:40.173 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=11.2%, 16=26.0%, 32=55.4%, >=64=1.8% 00:14:40.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.173 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:40.173 issued rwts: total=213179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.173 00:14:40.173 Run status group 0 (all jobs): 00:14:40.173 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=833MiB (873MB), run=5001-5001msec 00:14:41.111 ----------------------------------------------------- 00:14:41.111 Suppressions used: 00:14:41.111 count bytes template 00:14:41.111 1 11 /usr/src/fio/parse.c 00:14:41.111 1 8 libtcmalloc_minimal.so 00:14:41.111 1 904 libcrypto.so 00:14:41.111 ----------------------------------------------------- 00:14:41.111 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:41.111 { 00:14:41.111 "subsystems": [ 00:14:41.111 { 00:14:41.111 "subsystem": "bdev", 00:14:41.111 "config": [ 00:14:41.111 { 
00:14:41.111 "params": { 00:14:41.111 "io_mechanism": "libaio", 00:14:41.111 "conserve_cpu": true, 00:14:41.111 "filename": "/dev/nvme0n1", 00:14:41.111 "name": "xnvme_bdev" 00:14:41.111 }, 00:14:41.111 "method": "bdev_xnvme_create" 00:14:41.111 }, 00:14:41.111 { 00:14:41.111 "method": "bdev_wait_for_examine" 00:14:41.111 } 00:14:41.111 ] 00:14:41.111 } 00:14:41.111 ] 00:14:41.111 } 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.111 13:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.371 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:41.371 fio-3.35 00:14:41.371 Starting 1 thread 00:14:47.954 00:14:47.954 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72629: Wed Dec 11 13:12:38 2024 00:14:47.954 write: IOPS=45.1k, BW=176MiB/s (185MB/s)(881MiB/5001msec); 0 zone resets 00:14:47.954 slat (usec): min=4, max=3252, avg=19.43, stdev=24.02 00:14:47.954 clat (usec): min=93, max=5220, avg=837.17, stdev=526.03 00:14:47.954 lat (usec): min=141, max=5339, avg=856.60, stdev=530.19 00:14:47.954 clat percentiles (usec): 00:14:47.954 | 1.00th=[ 178], 5.00th=[ 253], 10.00th=[ 322], 20.00th=[ 441], 00:14:47.954 | 30.00th=[ 553], 40.00th=[ 660], 50.00th=[ 758], 60.00th=[ 865], 00:14:47.954 | 70.00th=[ 979], 80.00th=[ 1106], 90.00th=[ 1319], 95.00th=[ 1663], 00:14:47.954 | 99.00th=[ 3064], 99.50th=[ 3687], 99.90th=[ 4490], 99.95th=[ 4686], 00:14:47.954 | 99.99th=[ 5014] 00:14:47.954 bw ( KiB/s): min=161200, max=199688, per=99.39%, avg=179189.33, stdev=14115.06, samples=9 00:14:47.954 iops : min=40300, max=49922, avg=44797.33, stdev=3528.76, samples=9 00:14:47.954 lat (usec) : 100=0.02%, 250=4.83%, 500=20.40%, 750=23.73%, 1000=22.89% 00:14:47.954 lat (msec) : 2=24.82%, 4=2.98%, 10=0.32% 00:14:47.954 cpu : usr=26.24%, sys=52.12%, ctx=83, majf=0, minf=765 00:14:47.954 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=10.8%, 16=26.1%, 32=56.2%, >=64=1.8% 00:14:47.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.954 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:47.954 issued rwts: total=0,225412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:47.954 00:14:47.954 Run status group 0 (all jobs): 00:14:47.954 WRITE: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=881MiB (923MB), run=5001-5001msec 00:14:48.522 ----------------------------------------------------- 00:14:48.522 Suppressions used: 00:14:48.522 count bytes template 00:14:48.522 1 11 /usr/src/fio/parse.c 00:14:48.522 1 8 libtcmalloc_minimal.so 00:14:48.522 1 904 libcrypto.so 00:14:48.522 ----------------------------------------------------- 00:14:48.522 00:14:48.522 00:14:48.522 real 0m15.179s 00:14:48.522 user 0m6.450s 00:14:48.522 sys 0m6.150s 
00:14:48.522 13:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.522 13:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:48.522 ************************************ 00:14:48.522 END TEST xnvme_fio_plugin 00:14:48.522 ************************************ 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:48.782 13:12:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:48.782 13:12:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.782 13:12:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.782 13:12:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.782 ************************************ 00:14:48.782 START TEST xnvme_rpc 00:14:48.782 ************************************ 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72721 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72721 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72721 ']' 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.782 13:12:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.782 [2024-12-11 13:12:40.276881] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:14:48.782 [2024-12-11 13:12:40.277698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72721 ] 00:14:49.041 [2024-12-11 13:12:40.478627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.299 [2024-12-11 13:12:40.612563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 xnvme_bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72721 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72721 ']' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72721 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.235 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72721 00:14:50.494 killing process with pid 72721 00:14:50.494 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.494 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.494 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72721' 00:14:50.494 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72721 00:14:50.494 13:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72721 00:14:53.025 00:14:53.026 real 0m4.225s 00:14:53.026 user 0m4.105s 00:14:53.026 sys 0m0.735s 00:14:53.026 13:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.026 ************************************ 00:14:53.026 END TEST xnvme_rpc 00:14:53.026 ************************************ 00:14:53.026 13:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.026 13:12:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:53.026 13:12:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.026 13:12:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.026 13:12:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.026 ************************************ 00:14:53.026 START TEST xnvme_bdevperf 00:14:53.026 ************************************ 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:53.026 13:12:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.026 { 00:14:53.026 "subsystems": [ 00:14:53.026 { 00:14:53.026 "subsystem": "bdev", 00:14:53.026 "config": [ 00:14:53.026 { 00:14:53.026 "params": { 00:14:53.026 "io_mechanism": "io_uring", 00:14:53.026 "conserve_cpu": false, 00:14:53.026 "filename": "/dev/nvme0n1", 00:14:53.026 "name": "xnvme_bdev" 00:14:53.026 }, 00:14:53.026 "method": "bdev_xnvme_create" 00:14:53.026 }, 00:14:53.026 { 00:14:53.026 "method": "bdev_wait_for_examine" 00:14:53.026 } 00:14:53.026 ] 00:14:53.026 } 00:14:53.026 ] 00:14:53.026 } 00:14:53.026 [2024-12-11 13:12:44.564606] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:14:53.026 [2024-12-11 13:12:44.564728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72807 ] 00:14:53.284 [2024-12-11 13:12:44.748489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.543 [2024-12-11 13:12:44.874869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.800 Running I/O for 5 seconds... 00:14:56.112 32896.00 IOPS, 128.50 MiB/s [2024-12-11T13:12:48.617Z] 35712.00 IOPS, 139.50 MiB/s [2024-12-11T13:12:49.592Z] 32213.33 IOPS, 125.83 MiB/s [2024-12-11T13:12:50.527Z] 31743.75 IOPS, 124.00 MiB/s [2024-12-11T13:12:50.527Z] 33113.40 IOPS, 129.35 MiB/s 00:14:58.959 Latency(us) 00:14:58.959 [2024-12-11T13:12:50.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.959 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:58.959 xnvme_bdev : 5.01 33075.13 129.20 0.00 0.00 1928.85 759.98 8106.46 00:14:58.959 [2024-12-11T13:12:50.527Z] =================================================================================================================== 00:14:58.959 [2024-12-11T13:12:50.527Z] Total : 33075.13 129.20 0.00 0.00 1928.85 759.98 8106.46 00:15:00.337 13:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:00.337 13:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:00.337 13:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:00.337 13:12:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:00.337 13:12:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:00.337 { 00:15:00.337 "subsystems": [ 00:15:00.337 { 00:15:00.337 "subsystem": "bdev", 00:15:00.337 "config": [ 00:15:00.337 { 00:15:00.337 "params": { 00:15:00.337 "io_mechanism": "io_uring", 00:15:00.337 "conserve_cpu": false, 00:15:00.337 "filename": "/dev/nvme0n1", 00:15:00.337 "name": "xnvme_bdev" 00:15:00.337 }, 00:15:00.337 "method": "bdev_xnvme_create" 00:15:00.337 }, 00:15:00.337 { 00:15:00.337 "method": "bdev_wait_for_examine" 00:15:00.337 } 00:15:00.337 ] 00:15:00.337 } 00:15:00.337 ] 00:15:00.337 } 00:15:00.337 [2024-12-11 13:12:51.602847] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:15:00.337 [2024-12-11 13:12:51.603224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72888 ] 00:15:00.337 [2024-12-11 13:12:51.790952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.597 [2024-12-11 13:12:51.928435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.856 Running I/O for 5 seconds... 00:15:03.174 26548.00 IOPS, 103.70 MiB/s [2024-12-11T13:12:55.677Z] 25082.00 IOPS, 97.98 MiB/s [2024-12-11T13:12:56.614Z] 24038.67 IOPS, 93.90 MiB/s [2024-12-11T13:12:57.553Z] 23853.00 IOPS, 93.18 MiB/s 00:15:05.985 Latency(us) 00:15:05.985 [2024-12-11T13:12:57.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.985 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:05.985 xnvme_bdev : 5.00 23721.68 92.66 0.00 0.00 2689.76 1309.40 8106.46 00:15:05.985 [2024-12-11T13:12:57.553Z] =================================================================================================================== 00:15:05.985 [2024-12-11T13:12:57.553Z] Total : 23721.68 92.66 0.00 0.00 2689.76 1309.40 8106.46 00:15:07.363 00:15:07.363 real 0m14.080s 00:15:07.363 user 0m7.044s 00:15:07.363 sys 0m6.800s 00:15:07.363 13:12:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.363 13:12:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.363 ************************************ 00:15:07.363 END TEST xnvme_bdevperf 00:15:07.363 ************************************ 00:15:07.363 13:12:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:07.364 13:12:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:07.364 13:12:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.364 13:12:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.364 ************************************ 00:15:07.364 START TEST xnvme_fio_plugin 00:15:07.364 ************************************ 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.364 13:12:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.364 { 00:15:07.364 "subsystems": [ 00:15:07.364 { 00:15:07.364 "subsystem": "bdev", 00:15:07.364 "config": [ 00:15:07.364 { 00:15:07.364 "params": { 00:15:07.364 "io_mechanism": "io_uring", 00:15:07.364 "conserve_cpu": false, 00:15:07.364 "filename": "/dev/nvme0n1", 00:15:07.364 "name": "xnvme_bdev" 00:15:07.364 }, 00:15:07.364 "method": "bdev_xnvme_create" 00:15:07.364 }, 00:15:07.364 { 00:15:07.364 "method": "bdev_wait_for_examine" 00:15:07.364 } 00:15:07.364 ] 00:15:07.364 } 00:15:07.364 ] 00:15:07.364 } 00:15:07.364 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:07.364 fio-3.35 00:15:07.364 Starting 1 thread 00:15:13.931 00:15:13.931 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73013: Wed Dec 11 13:13:04 2024 00:15:13.931 read: IOPS=28.5k, BW=111MiB/s (117MB/s)(557MiB/5002msec) 00:15:13.931 slat (usec): min=3, max=145, avg= 5.68, stdev= 2.42 00:15:13.931 clat (usec): min=1405, max=3010, avg=2018.80, stdev=277.37 00:15:13.931 lat (usec): min=1409, max=3024, avg=2024.49, stdev=278.55 00:15:13.931 clat percentiles (usec): 00:15:13.931 | 1.00th=[ 1549], 5.00th=[ 1631], 10.00th=[ 1680], 20.00th=[ 1762], 00:15:13.931 | 30.00th=[ 1844], 40.00th=[ 1909], 50.00th=[ 1975], 60.00th=[ 2057], 00:15:13.931 | 70.00th=[ 2147], 80.00th=[ 2245], 90.00th=[ 2409], 95.00th=[ 2540], 00:15:13.931 | 99.00th=[ 2737], 99.50th=[ 2802], 99.90th=[ 2900], 99.95th=[ 2933], 00:15:13.931 | 99.99th=[ 2966] 00:15:13.931 bw ( KiB/s): min=100352, max=126976, per=100.00%, 
avg=114289.78, stdev=8400.46, samples=9 00:15:13.931 iops : min=25088, max=31744, avg=28572.44, stdev=2100.11, samples=9 00:15:13.931 lat (msec) : 2=52.44%, 4=47.56% 00:15:13.931 cpu : usr=30.93%, sys=67.99%, ctx=10, majf=0, minf=762 00:15:13.931 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:13.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.931 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:13.931 issued rwts: total=142656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:13.931 00:15:13.931 Run status group 0 (all jobs): 00:15:13.931 READ: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=557MiB (584MB), run=5002-5002msec 00:15:14.924 ----------------------------------------------------- 00:15:14.924 Suppressions used: 00:15:14.924 count bytes template 00:15:14.924 1 11 /usr/src/fio/parse.c 00:15:14.924 1 8 libtcmalloc_minimal.so 00:15:14.924 1 904 libcrypto.so 00:15:14.924 ----------------------------------------------------- 00:15:14.924 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
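
A note on the sanitizer dance traced just above: fio dlopen()s the SPDK plugin, so the ASAN runtime the plugin was linked against must already be in the process before the plugin loads. The harness finds that runtime with ldd and preloads it ahead of the plugin itself. A minimal stand-alone sketch, using the paths exactly as they appear in this log (the JSON config still has to be supplied on fd 62 by the caller, as gen_conf does here):

# Locate the ASAN runtime linked into the fio plugin, if any.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# Preload the sanitizer runtime first, then the plugin, so ASAN's
# interceptors are installed before fio dlopen()s spdk_bdev.
if [[ -n "$asan_lib" ]]; then
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev
fi
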
00:15:14.924 { 00:15:14.924 "subsystems": [ 00:15:14.924 { 00:15:14.924 "subsystem": "bdev", 00:15:14.924 "config": [ 00:15:14.924 { 00:15:14.924 "params": { 00:15:14.924 "io_mechanism": "io_uring", 00:15:14.924 "conserve_cpu": false, 00:15:14.924 "filename": "/dev/nvme0n1", 00:15:14.924 "name": "xnvme_bdev" 00:15:14.924 }, 00:15:14.924 "method": "bdev_xnvme_create" 00:15:14.924 }, 00:15:14.924 { 00:15:14.924 "method": "bdev_wait_for_examine" 00:15:14.924 } 00:15:14.924 ] 00:15:14.924 } 00:15:14.924 ] 00:15:14.924 } 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.924 13:13:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.924 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.924 fio-3.35 00:15:14.924 Starting 1 thread 00:15:21.495 00:15:21.495 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73110: Wed Dec 11 13:13:12 2024 00:15:21.495 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(549MiB/5001msec); 0 zone resets 00:15:21.495 slat (usec): min=2, max=113, avg= 5.77, stdev= 2.46 00:15:21.495 clat (usec): min=1193, max=3607, avg=2046.34, stdev=303.95 00:15:21.495 lat (usec): min=1196, max=3617, avg=2052.11, stdev=305.08 00:15:21.495 clat percentiles (usec): 00:15:21.495 | 1.00th=[ 1483], 5.00th=[ 1598], 10.00th=[ 1680], 20.00th=[ 1778], 00:15:21.495 | 30.00th=[ 1860], 40.00th=[ 1942], 50.00th=[ 2024], 60.00th=[ 2114], 00:15:21.495 | 70.00th=[ 2212], 80.00th=[ 2311], 90.00th=[ 2442], 95.00th=[ 2573], 00:15:21.495 | 99.00th=[ 2802], 99.50th=[ 2900], 99.90th=[ 3228], 99.95th=[ 3326], 00:15:21.495 | 99.99th=[ 3523] 00:15:21.495 bw ( KiB/s): min=101376, max=123904, per=100.00%, avg=113208.89, stdev=6885.61, samples=9 00:15:21.495 iops : min=25344, max=30976, avg=28302.22, stdev=1721.40, samples=9 00:15:21.495 lat (msec) : 2=47.37%, 4=52.63% 00:15:21.495 cpu : usr=31.02%, sys=67.88%, ctx=13, majf=0, minf=763 00:15:21.495 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:21.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.495 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:21.495 issued rwts: total=0,140608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:21.495 00:15:21.495 Run status group 0 (all jobs): 00:15:21.495 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=549MiB (576MB), run=5001-5001msec 00:15:22.433 ----------------------------------------------------- 00:15:22.433 Suppressions used: 00:15:22.433 count bytes template 00:15:22.433 1 11 /usr/src/fio/parse.c 00:15:22.434 1 8 libtcmalloc_minimal.so 00:15:22.434 1 904 libcrypto.so 00:15:22.434 ----------------------------------------------------- 00:15:22.434 00:15:22.434 00:15:22.434 real 0m15.055s 00:15:22.434 user 0m6.995s 00:15:22.434 sys 0m7.691s 00:15:22.434 13:13:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.434 13:13:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
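
The --spdk_json_conf=/dev/fd/62 argument in the fio command lines above works because the harness feeds the generated subsystem config in over an inherited file descriptor. A rough stand-alone equivalent, assuming a gen_conf that simply prints the same JSON this log shows, with bash process substitution providing the /dev/fd path:

# Hypothetical stand-in for the harness's gen_conf: print the bdev
# subsystem config that appears verbatim in the log above.
gen_conf() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# Process substitution turns gen_conf's output into a /dev/fd/NN path;
# the harness does the same thing with fd 62 via its own redirection.
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
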
00:15:22.434 ************************************ 00:15:22.434 END TEST xnvme_fio_plugin 00:15:22.434 ************************************ 00:15:22.434 13:13:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:22.434 13:13:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:22.434 13:13:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:22.434 13:13:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:22.434 13:13:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:22.434 13:13:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.434 13:13:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.434 ************************************ 00:15:22.434 START TEST xnvme_rpc 00:15:22.434 ************************************ 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73196 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73196 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73196 ']' 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.434 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.434 [2024-12-11 13:13:13.866499] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
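
The xnvme_rpc test starting here exercises the same bdev over spdk_tgt's JSON-RPC socket rather than through a data-path tool. A condensed equivalent of the traced flow, assuming SPDK's stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper (both talk to /var/tmp/spdk.sock):

# Start a bare target; it listens on /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!
sleep 2   # crude stand-in for the harness's waitforlisten

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path

# Create the xnvme bdev; -c is cc["true"], i.e. conserve_cpu on.
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# Read the config back and check one of the parameters just set.
"$rpc" framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# -> true

"$rpc" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"
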
00:15:22.434 [2024-12-11 13:13:13.866890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73196 ] 00:15:22.693 [2024-12-11 13:13:14.053132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.693 [2024-12-11 13:13:14.180026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.631 xnvme_bdev 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:23.631 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:23.890 13:13:15 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73196 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73196 ']' 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73196 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73196 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.890 killing process with pid 73196 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73196' 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73196 00:15:23.890 13:13:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73196 00:15:26.425 ************************************ 00:15:26.425 END TEST xnvme_rpc 00:15:26.425 ************************************ 00:15:26.425 00:15:26.425 real 0m4.215s 00:15:26.425 user 0m4.105s 00:15:26.425 sys 0m0.725s 00:15:26.425 13:13:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.425 13:13:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.684 13:13:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:26.684 13:13:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:26.684 13:13:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.684 13:13:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.684 ************************************ 00:15:26.684 START TEST xnvme_bdevperf 00:15:26.684 ************************************ 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:26.684 13:13:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.684 { 00:15:26.684 "subsystems": [ 00:15:26.684 { 00:15:26.684 "subsystem": "bdev", 00:15:26.684 "config": [ 00:15:26.684 { 00:15:26.684 "params": { 00:15:26.684 "io_mechanism": "io_uring", 00:15:26.684 "conserve_cpu": true, 00:15:26.684 "filename": "/dev/nvme0n1", 00:15:26.684 "name": "xnvme_bdev" 00:15:26.684 }, 00:15:26.684 "method": "bdev_xnvme_create" 00:15:26.684 }, 00:15:26.684 { 00:15:26.684 "method": "bdev_wait_for_examine" 00:15:26.684 } 00:15:26.684 ] 00:15:26.684 } 00:15:26.685 ] 00:15:26.685 } 00:15:26.685 [2024-12-11 13:13:18.146475] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:15:26.685 [2024-12-11 13:13:18.146622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73282 ] 00:15:26.944 [2024-12-11 13:13:18.335039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.944 [2024-12-11 13:13:18.476432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.513 Running I/O for 5 seconds... 00:15:29.389 24192.00 IOPS, 94.50 MiB/s [2024-12-11T13:13:22.337Z] 26080.00 IOPS, 101.88 MiB/s [2024-12-11T13:13:22.905Z] 25013.33 IOPS, 97.71 MiB/s [2024-12-11T13:13:24.284Z] 25208.00 IOPS, 98.47 MiB/s 00:15:32.716 Latency(us) 00:15:32.716 [2024-12-11T13:13:24.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.716 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:32.716 xnvme_bdev : 5.00 25103.61 98.06 0.00 0.00 2542.30 1342.30 8106.46 00:15:32.716 [2024-12-11T13:13:24.284Z] =================================================================================================================== 00:15:32.716 [2024-12-11T13:13:24.284Z] Total : 25103.61 98.06 0.00 0.00 2542.30 1342.30 8106.46 00:15:33.677 13:13:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:33.677 13:13:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:33.677 13:13:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:33.677 13:13:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:33.677 13:13:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:33.677 { 00:15:33.677 "subsystems": [ 00:15:33.677 { 00:15:33.677 "subsystem": "bdev", 00:15:33.677 "config": [ 00:15:33.677 { 00:15:33.677 "params": { 00:15:33.677 "io_mechanism": "io_uring", 00:15:33.677 "conserve_cpu": true, 00:15:33.677 "filename": "/dev/nvme0n1", 00:15:33.677 "name": "xnvme_bdev" 00:15:33.677 }, 00:15:33.677 "method": "bdev_xnvme_create" 00:15:33.677 }, 00:15:33.677 { 00:15:33.677 "method": "bdev_wait_for_examine" 00:15:33.677 } 00:15:33.677 ] 00:15:33.677 } 00:15:33.677 ] 00:15:33.677 } 00:15:33.677 [2024-12-11 13:13:25.201687] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
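
Every bdevperf run in this stage shares one invocation shape, differing only in the -w workload. The knobs, as used above:

# --json /dev/fd/62  bdev config, the same JSON printed above
# -q 64              queue depth
# -o 4096            I/O size in bytes
# -w <pattern>       workload: randread first, then randwrite
# -t 5               run time in seconds
# -T xnvme_bdev      restrict the run to the one xnvme bdev
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
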
00:15:33.677 [2024-12-11 13:13:25.201824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73363 ] 00:15:33.937 [2024-12-11 13:13:25.387172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.195 [2024-12-11 13:13:25.520815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.454 Running I/O for 5 seconds... 00:15:36.769 27968.00 IOPS, 109.25 MiB/s [2024-12-11T13:13:29.274Z] 27072.00 IOPS, 105.75 MiB/s [2024-12-11T13:13:30.212Z] 25621.33 IOPS, 100.08 MiB/s [2024-12-11T13:13:31.157Z] 24928.00 IOPS, 97.38 MiB/s [2024-12-11T13:13:31.157Z] 24627.20 IOPS, 96.20 MiB/s 00:15:39.589 Latency(us) 00:15:39.589 [2024-12-11T13:13:31.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.589 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:39.589 xnvme_bdev : 5.01 24576.47 96.00 0.00 0.00 2596.39 1309.40 8211.74 00:15:39.589 [2024-12-11T13:13:31.157Z] =================================================================================================================== 00:15:39.589 [2024-12-11T13:13:31.157Z] Total : 24576.47 96.00 0.00 0.00 2596.39 1309.40 8211.74 00:15:40.964 00:15:40.964 real 0m14.101s 00:15:40.964 user 0m8.087s 00:15:40.964 sys 0m5.476s 00:15:40.964 13:13:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.964 13:13:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:40.964 ************************************ 00:15:40.964 END TEST xnvme_bdevperf 00:15:40.964 ************************************ 00:15:40.964 13:13:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:40.964 13:13:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.964 13:13:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.964 13:13:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.964 ************************************ 00:15:40.964 START TEST xnvme_fio_plugin 00:15:40.964 ************************************ 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:40.964 { 00:15:40.964 "subsystems": [ 00:15:40.964 { 00:15:40.964 "subsystem": "bdev", 00:15:40.964 "config": [ 00:15:40.964 { 00:15:40.964 "params": { 00:15:40.964 "io_mechanism": "io_uring", 00:15:40.964 "conserve_cpu": true, 00:15:40.964 "filename": "/dev/nvme0n1", 00:15:40.964 "name": "xnvme_bdev" 00:15:40.964 }, 00:15:40.964 "method": "bdev_xnvme_create" 00:15:40.964 }, 00:15:40.964 { 00:15:40.964 "method": "bdev_wait_for_examine" 00:15:40.964 } 00:15:40.964 ] 00:15:40.964 } 00:15:40.964 ] 00:15:40.964 } 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:40.964 13:13:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.964 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:40.964 fio-3.35 00:15:40.964 Starting 1 thread 00:15:47.536 00:15:47.536 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73488: Wed Dec 11 13:13:38 2024 00:15:47.536 read: IOPS=23.4k, BW=91.4MiB/s (95.8MB/s)(457MiB/5001msec) 00:15:47.536 slat (usec): min=3, max=147, avg= 7.57, stdev= 3.48 00:15:47.536 clat (usec): min=1464, max=3889, avg=2436.21, stdev=288.34 00:15:47.536 lat (usec): min=1468, max=3902, avg=2443.78, stdev=289.43 00:15:47.536 clat percentiles (usec): 00:15:47.536 | 1.00th=[ 1745], 5.00th=[ 1942], 10.00th=[ 2057], 20.00th=[ 2212], 00:15:47.536 | 30.00th=[ 2278], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:15:47.536 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2802], 95.00th=[ 2900], 00:15:47.536 | 99.00th=[ 3032], 99.50th=[ 3097], 99.90th=[ 3523], 99.95th=[ 3654], 00:15:47.536 | 99.99th=[ 3785] 00:15:47.536 bw ( KiB/s): 
min=88320, max=103936, per=100.00%, avg=94065.78, stdev=5516.06, samples=9 00:15:47.536 iops : min=22080, max=25984, avg=23516.44, stdev=1379.01, samples=9 00:15:47.536 lat (msec) : 2=7.42%, 4=92.58% 00:15:47.536 cpu : usr=48.25%, sys=47.29%, ctx=26, majf=0, minf=762 00:15:47.536 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.536 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:47.536 issued rwts: total=117024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.536 00:15:47.536 Run status group 0 (all jobs): 00:15:47.536 READ: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=457MiB (479MB), run=5001-5001msec 00:15:48.473 ----------------------------------------------------- 00:15:48.473 Suppressions used: 00:15:48.473 count bytes template 00:15:48.473 1 11 /usr/src/fio/parse.c 00:15:48.473 1 8 libtcmalloc_minimal.so 00:15:48.473 1 904 libcrypto.so 00:15:48.473 ----------------------------------------------------- 00:15:48.473 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:48.473 { 00:15:48.473 "subsystems": [ 00:15:48.473 { 00:15:48.473 "subsystem": "bdev", 00:15:48.473 "config": [ 00:15:48.473 { 00:15:48.473 "params": { 00:15:48.473 "io_mechanism": "io_uring", 00:15:48.473 
"conserve_cpu": true, 00:15:48.473 "filename": "/dev/nvme0n1", 00:15:48.473 "name": "xnvme_bdev" 00:15:48.473 }, 00:15:48.473 "method": "bdev_xnvme_create" 00:15:48.473 }, 00:15:48.473 { 00:15:48.473 "method": "bdev_wait_for_examine" 00:15:48.473 } 00:15:48.473 ] 00:15:48.473 } 00:15:48.473 ] 00:15:48.473 } 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.473 13:13:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.473 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:48.473 fio-3.35 00:15:48.473 Starting 1 thread 00:15:55.063 00:15:55.063 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73584: Wed Dec 11 13:13:45 2024 00:15:55.063 write: IOPS=22.4k, BW=87.6MiB/s (91.8MB/s)(438MiB/5001msec); 0 zone resets 00:15:55.063 slat (usec): min=2, max=111, avg= 8.05, stdev= 3.76 00:15:55.063 clat (usec): min=1529, max=3535, avg=2533.11, stdev=266.07 00:15:55.063 lat (usec): min=1534, max=3549, avg=2541.16, stdev=267.09 00:15:55.063 clat percentiles (usec): 00:15:55.063 | 1.00th=[ 1827], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2311], 00:15:55.063 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:15:55.063 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 2868], 95.00th=[ 2933], 00:15:55.063 | 99.00th=[ 3064], 99.50th=[ 3130], 99.90th=[ 3228], 99.95th=[ 3261], 00:15:55.063 | 99.99th=[ 3458] 00:15:55.063 bw ( KiB/s): min=84992, max=96768, per=99.82%, avg=89523.44, stdev=3259.86, samples=9 00:15:55.063 iops : min=21248, max=24192, avg=22380.78, stdev=815.00, samples=9 00:15:55.063 lat (msec) : 2=3.08%, 4=96.92% 00:15:55.063 cpu : usr=49.28%, sys=46.36%, ctx=17, majf=0, minf=763 00:15:55.063 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:55.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.063 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:55.063 issued rwts: total=0,112128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.063 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:55.063 00:15:55.063 Run status group 0 (all jobs): 00:15:55.063 WRITE: bw=87.6MiB/s (91.8MB/s), 87.6MiB/s-87.6MiB/s (91.8MB/s-91.8MB/s), io=438MiB (459MB), run=5001-5001msec 00:15:56.011 ----------------------------------------------------- 00:15:56.011 Suppressions used: 00:15:56.011 count bytes template 00:15:56.011 1 11 /usr/src/fio/parse.c 00:15:56.011 1 8 libtcmalloc_minimal.so 00:15:56.011 1 904 libcrypto.so 00:15:56.011 ----------------------------------------------------- 00:15:56.011 00:15:56.011 ************************************ 00:15:56.011 END TEST xnvme_fio_plugin 00:15:56.011 ************************************ 00:15:56.011 00:15:56.011 real 0m15.099s 00:15:56.011 user 0m8.960s 00:15:56.011 sys 0m5.454s 00:15:56.011 13:13:47 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.011 13:13:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:56.011 13:13:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:56.011 13:13:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:56.011 13:13:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.011 13:13:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.011 ************************************ 00:15:56.011 START TEST xnvme_rpc 00:15:56.011 ************************************ 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73677 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73677 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73677 ']' 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.011 13:13:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.011 [2024-12-11 13:13:47.522522] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
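
From here the suite flips io_mechanism to io_uring_cmd: xnvme now opens the NVMe generic character device /dev/ng0n1 (uring passthrough) instead of the block node, and the empty final argument selects cc["false"], leaving conserve_cpu off. The rpc_xnvme helper traced throughout is a small jq readback; a reconstruction from its trace (a sketch; rpc_cmd is the harness's RPC wrapper):

rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''

# Reconstruction of rpc_xnvme (xnvme/common.sh): pull one parameter of
# the bdev_xnvme_create entry back out of the live target's config.
rpc_xnvme() {
  rpc_cmd framework_get_config bdev |
    jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

# The assertions the test makes above, condensed:
[[ $(rpc_xnvme filename) == /dev/ng0n1 ]]
[[ $(rpc_xnvme io_mechanism) == io_uring_cmd ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]
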
00:15:56.011 [2024-12-11 13:13:47.522686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73677 ] 00:15:56.271 [2024-12-11 13:13:47.709163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.529 [2024-12-11 13:13:47.841344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.467 xnvme_bdev 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.467 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.467 13:13:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:57.467 13:13:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:57.467 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.467 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73677 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73677 ']' 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73677 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73677 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73677' 00:15:57.726 killing process with pid 73677 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73677 00:15:57.726 13:13:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73677 00:16:00.264 00:16:00.264 real 0m4.242s 00:16:00.264 user 0m4.115s 00:16:00.264 sys 0m0.747s 00:16:00.264 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.264 ************************************ 00:16:00.264 END TEST xnvme_rpc 00:16:00.264 ************************************ 00:16:00.264 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.264 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:00.264 13:13:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:00.264 13:13:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.264 13:13:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.264 ************************************ 00:16:00.264 START TEST xnvme_bdevperf 00:16:00.264 ************************************ 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:00.264 13:13:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:00.264 { 00:16:00.264 "subsystems": [ 00:16:00.264 { 00:16:00.264 "subsystem": "bdev", 00:16:00.264 "config": [ 00:16:00.264 { 00:16:00.264 "params": { 00:16:00.264 "io_mechanism": "io_uring_cmd", 00:16:00.264 "conserve_cpu": false, 00:16:00.264 "filename": "/dev/ng0n1", 00:16:00.264 "name": "xnvme_bdev" 00:16:00.264 }, 00:16:00.264 "method": "bdev_xnvme_create" 00:16:00.264 }, 00:16:00.264 { 00:16:00.264 "method": "bdev_wait_for_examine" 00:16:00.264 } 00:16:00.264 ] 00:16:00.264 } 00:16:00.264 ] 00:16:00.264 } 00:16:00.264 [2024-12-11 13:13:51.821570] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:16:00.264 [2024-12-11 13:13:51.821712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73762 ] 00:16:00.524 [2024-12-11 13:13:52.004815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.783 [2024-12-11 13:13:52.138030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.042 Running I/O for 5 seconds... 00:16:03.360 30208.00 IOPS, 118.00 MiB/s [2024-12-11T13:13:55.865Z] 29344.00 IOPS, 114.62 MiB/s [2024-12-11T13:13:56.804Z] 29269.00 IOPS, 114.33 MiB/s [2024-12-11T13:13:57.743Z] 27807.75 IOPS, 108.62 MiB/s 00:16:06.175 Latency(us) 00:16:06.175 [2024-12-11T13:13:57.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.175 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:06.175 xnvme_bdev : 5.01 27412.52 107.08 0.00 0.00 2327.86 1138.33 8053.82 00:16:06.175 [2024-12-11T13:13:57.743Z] =================================================================================================================== 00:16:06.175 [2024-12-11T13:13:57.743Z] Total : 27412.52 107.08 0.00 0.00 2327.86 1138.33 8053.82 00:16:07.554 13:13:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:07.554 13:13:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:07.554 13:13:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:07.554 13:13:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:07.554 13:13:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:07.554 { 00:16:07.554 "subsystems": [ 00:16:07.554 { 00:16:07.554 "subsystem": "bdev", 00:16:07.554 "config": [ 00:16:07.554 { 00:16:07.554 "params": { 00:16:07.554 "io_mechanism": "io_uring_cmd", 00:16:07.554 "conserve_cpu": false, 00:16:07.554 "filename": "/dev/ng0n1", 00:16:07.554 "name": "xnvme_bdev" 00:16:07.554 }, 00:16:07.554 "method": "bdev_xnvme_create" 00:16:07.554 }, 00:16:07.554 { 00:16:07.554 "method": "bdev_wait_for_examine" 00:16:07.554 } 00:16:07.554 ] 00:16:07.554 } 00:16:07.554 ] 00:16:07.554 } 00:16:07.554 [2024-12-11 13:13:58.857061] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:16:07.554 [2024-12-11 13:13:58.857213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73845 ] 00:16:07.554 [2024-12-11 13:13:59.043951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.813 [2024-12-11 13:13:59.180518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.073 Running I/O for 5 seconds... 00:16:10.023 24064.00 IOPS, 94.00 MiB/s [2024-12-11T13:14:02.969Z] 23712.00 IOPS, 92.62 MiB/s [2024-12-11T13:14:03.906Z] 24448.00 IOPS, 95.50 MiB/s [2024-12-11T13:14:04.843Z] 24752.00 IOPS, 96.69 MiB/s 00:16:13.275 Latency(us) 00:16:13.275 [2024-12-11T13:14:04.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.275 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:13.275 xnvme_bdev : 5.00 24965.29 97.52 0.00 0.00 2555.62 1487.06 8053.82 00:16:13.275 [2024-12-11T13:14:04.843Z] =================================================================================================================== 00:16:13.275 [2024-12-11T13:14:04.843Z] Total : 24965.29 97.52 0.00 0.00 2555.62 1487.06 8053.82 00:16:14.686 13:14:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:14.686 13:14:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:14.686 13:14:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:14.686 13:14:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:14.686 13:14:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:14.686 { 00:16:14.686 "subsystems": [ 00:16:14.686 { 00:16:14.686 "subsystem": "bdev", 00:16:14.686 "config": [ 00:16:14.686 { 00:16:14.686 "params": { 00:16:14.686 "io_mechanism": "io_uring_cmd", 00:16:14.686 "conserve_cpu": false, 00:16:14.686 "filename": "/dev/ng0n1", 00:16:14.686 "name": "xnvme_bdev" 00:16:14.686 }, 00:16:14.686 "method": "bdev_xnvme_create" 00:16:14.686 }, 00:16:14.686 { 00:16:14.686 "method": "bdev_wait_for_examine" 00:16:14.686 } 00:16:14.686 ] 00:16:14.686 } 00:16:14.686 ] 00:16:14.686 } 00:16:14.686 [2024-12-11 13:14:05.922137] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:16:14.686 [2024-12-11 13:14:05.922297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73925 ] 00:16:14.686 [2024-12-11 13:14:06.111021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.945 [2024-12-11 13:14:06.254359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.203 Running I/O for 5 seconds... 
00:16:17.517 72128.00 IOPS, 281.75 MiB/s [2024-12-11T13:14:10.021Z] 72128.00 IOPS, 281.75 MiB/s [2024-12-11T13:14:10.956Z] 72107.00 IOPS, 281.67 MiB/s [2024-12-11T13:14:11.893Z] 72064.25 IOPS, 281.50 MiB/s [2024-12-11T13:14:11.893Z] 72051.40 IOPS, 281.45 MiB/s 00:16:20.325 Latency(us) 00:16:20.325 [2024-12-11T13:14:11.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.325 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:20.325 xnvme_bdev : 5.00 72040.95 281.41 0.00 0.00 885.78 434.27 2026.62 00:16:20.325 [2024-12-11T13:14:11.893Z] =================================================================================================================== 00:16:20.325 [2024-12-11T13:14:11.893Z] Total : 72040.95 281.41 0.00 0.00 885.78 434.27 2026.62 00:16:21.703 13:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:21.703 13:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:21.703 13:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:21.703 13:14:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:21.703 13:14:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:21.703 { 00:16:21.703 "subsystems": [ 00:16:21.703 { 00:16:21.703 "subsystem": "bdev", 00:16:21.703 "config": [ 00:16:21.703 { 00:16:21.703 "params": { 00:16:21.703 "io_mechanism": "io_uring_cmd", 00:16:21.703 "conserve_cpu": false, 00:16:21.703 "filename": "/dev/ng0n1", 00:16:21.703 "name": "xnvme_bdev" 00:16:21.703 }, 00:16:21.703 "method": "bdev_xnvme_create" 00:16:21.703 }, 00:16:21.703 { 00:16:21.703 "method": "bdev_wait_for_examine" 00:16:21.703 } 00:16:21.703 ] 00:16:21.703 } 00:16:21.703 ] 00:16:21.703 } 00:16:21.703 [2024-12-11 13:14:12.958666] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:16:21.703 [2024-12-11 13:14:12.958816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74005 ] 00:16:21.703 [2024-12-11 13:14:13.144541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.962 [2024-12-11 13:14:13.285091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.221 Running I/O for 5 seconds... 
00:16:24.525 48652.00 IOPS, 190.05 MiB/s [2024-12-11T13:14:17.029Z] 49162.00 IOPS, 192.04 MiB/s [2024-12-11T13:14:17.963Z] 49071.33 IOPS, 191.68 MiB/s [2024-12-11T13:14:18.900Z] 48229.75 IOPS, 188.40 MiB/s [2024-12-11T13:14:18.900Z] 48180.80 IOPS, 188.21 MiB/s 00:16:27.332 Latency(us) 00:16:27.332 [2024-12-11T13:14:18.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.332 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:27.332 xnvme_bdev : 5.00 48162.91 188.14 0.00 0.00 1323.73 88.01 15475.97 00:16:27.332 [2024-12-11T13:14:18.900Z] =================================================================================================================== 00:16:27.332 [2024-12-11T13:14:18.900Z] Total : 48162.91 188.14 0.00 0.00 1323.73 88.01 15475.97 00:16:28.269 00:16:28.269 real 0m28.098s 00:16:28.269 user 0m14.728s 00:16:28.269 sys 0m12.918s 00:16:28.269 ************************************ 00:16:28.269 END TEST xnvme_bdevperf 00:16:28.269 ************************************ 00:16:28.269 13:14:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.269 13:14:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 13:14:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:28.528 13:14:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:28.528 13:14:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.528 13:14:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 ************************************ 00:16:28.528 START TEST xnvme_fio_plugin 00:16:28.528 ************************************ 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
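
The 28-second xnvme_bdevperf stage closed out above ran one timed pass per pattern; for io_uring_cmd that meant four workloads, and the much higher IOPS on the unmap and write_zeroes passes (~72k and ~48k vs ~25k) reflect that neither command moves user data. Condensed to a sketch, with the pattern list reconstructed from the runs observed in this log:

for io_pattern in randread randwrite unmap write_zeroes; do
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done
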
00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:28.528 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:28.529 { 00:16:28.529 "subsystems": [ 00:16:28.529 { 00:16:28.529 "subsystem": "bdev", 00:16:28.529 "config": [ 00:16:28.529 { 00:16:28.529 "params": { 00:16:28.529 "io_mechanism": "io_uring_cmd", 00:16:28.529 "conserve_cpu": false, 00:16:28.529 "filename": "/dev/ng0n1", 00:16:28.529 "name": "xnvme_bdev" 00:16:28.529 }, 00:16:28.529 "method": "bdev_xnvme_create" 00:16:28.529 }, 00:16:28.529 { 00:16:28.529 "method": "bdev_wait_for_examine" 00:16:28.529 } 00:16:28.529 ] 00:16:28.529 } 00:16:28.529 ] 00:16:28.529 } 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:28.529 13:14:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:28.788 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:28.788 fio-3.35 00:16:28.788 Starting 1 thread 00:16:35.357 00:16:35.357 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74129: Wed Dec 11 13:14:25 2024 00:16:35.357 read: IOPS=26.7k, BW=104MiB/s (109MB/s)(522MiB/5002msec) 00:16:35.357 slat (usec): min=2, max=110, avg= 7.14, stdev= 3.08 00:16:35.357 clat (usec): min=1217, max=3449, avg=2111.21, stdev=292.18 00:16:35.357 lat (usec): min=1219, max=3459, avg=2118.35, stdev=293.47 00:16:35.357 clat percentiles (usec): 00:16:35.357 | 1.00th=[ 1565], 5.00th=[ 1680], 10.00th=[ 1745], 20.00th=[ 1844], 00:16:35.357 | 30.00th=[ 1926], 40.00th=[ 2008], 50.00th=[ 2089], 60.00th=[ 2180], 00:16:35.357 | 70.00th=[ 2278], 80.00th=[ 2376], 90.00th=[ 2540], 95.00th=[ 2606], 00:16:35.357 | 99.00th=[ 2737], 99.50th=[ 2802], 99.90th=[ 2999], 99.95th=[ 3097], 00:16:35.357 | 99.99th=[ 3359] 00:16:35.357 bw ( KiB/s): min=92160, max=120832, per=99.12%, avg=105870.22, stdev=11899.95, samples=9 00:16:35.357 iops : min=23040, max=30208, avg=26467.56, stdev=2974.99, samples=9 00:16:35.357 lat (msec) : 2=39.76%, 4=60.24% 00:16:35.357 cpu : usr=36.35%, sys=62.25%, ctx=29, majf=0, minf=762 00:16:35.357 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:35.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.358 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:35.358 issued rwts: 
total=133561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.358 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.358 00:16:35.358 Run status group 0 (all jobs): 00:16:35.358 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=522MiB (547MB), run=5002-5002msec 00:16:35.924 ----------------------------------------------------- 00:16:35.924 Suppressions used: 00:16:35.924 count bytes template 00:16:35.924 1 11 /usr/src/fio/parse.c 00:16:35.924 1 8 libtcmalloc_minimal.so 00:16:35.924 1 904 libcrypto.so 00:16:35.924 ----------------------------------------------------- 00:16:35.924 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.924 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:35.925 { 00:16:35.925 "subsystems": [ 00:16:35.925 { 00:16:35.925 "subsystem": "bdev", 00:16:35.925 "config": [ 00:16:35.925 { 00:16:35.925 "params": { 00:16:35.925 "io_mechanism": "io_uring_cmd", 00:16:35.925 "conserve_cpu": false, 00:16:35.925 "filename": "/dev/ng0n1", 00:16:35.925 "name": "xnvme_bdev" 00:16:35.925 }, 00:16:35.925 "method": "bdev_xnvme_create" 00:16:35.925 }, 00:16:35.925 { 00:16:35.925 "method": "bdev_wait_for_examine" 00:16:35.925 } 00:16:35.925 ] 00:16:35.925 } 00:16:35.925 ] 00:16:35.925 } 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
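[editor's note] The ldd | grep | awk pipeline traced above locates the ASAN runtime so it can be preloaded ahead of the fio plugin; ASAN aborts unless it is the first DSO loaded, hence the ordering in LD_PRELOAD. A minimal sketch of that detection, assuming ldd's usual "lib => path (addr)" output where field 3 is the path:

# Sketch of the sanitizer-preload detection from autotest_common.sh;
# paths are assumptions copied from this log.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=(libasan libclang_rt.asan)
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break   # e.g. /usr/lib64/libasan.so.8
done
# ASAN must come before the plugin in the preload list.
export LD_PRELOAD="$asan_lib $plugin"
# ...then exec fio exactly as in the traced command line above.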
00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:35.925 13:14:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.197 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:36.197 fio-3.35 00:16:36.197 Starting 1 thread 00:16:42.767 00:16:42.767 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74221: Wed Dec 11 13:14:33 2024 00:16:42.767 write: IOPS=28.0k, BW=110MiB/s (115MB/s)(548MiB/5003msec); 0 zone resets 00:16:42.767 slat (usec): min=2, max=159, avg= 6.52, stdev= 2.38 00:16:42.767 clat (usec): min=86, max=20413, avg=2039.84, stdev=950.36 00:16:42.767 lat (usec): min=88, max=20417, avg=2046.37, stdev=950.44 00:16:42.767 clat percentiles (usec): 00:16:42.767 | 1.00th=[ 461], 5.00th=[ 1352], 10.00th=[ 1647], 20.00th=[ 1745], 00:16:42.767 | 30.00th=[ 1811], 40.00th=[ 1876], 50.00th=[ 1942], 60.00th=[ 2008], 00:16:42.767 | 70.00th=[ 2073], 80.00th=[ 2180], 90.00th=[ 2343], 95.00th=[ 2573], 00:16:42.767 | 99.00th=[ 5866], 99.50th=[ 8848], 99.90th=[14615], 99.95th=[16188], 00:16:42.767 | 99.99th=[17695] 00:16:42.767 bw ( KiB/s): min=95568, max=122368, per=99.97%, avg=112108.44, stdev=8374.18, samples=9 00:16:42.767 iops : min=23892, max=30592, avg=28027.11, stdev=2093.55, samples=9 00:16:42.767 lat (usec) : 100=0.01%, 250=0.37%, 500=0.73%, 750=0.77%, 1000=0.83% 00:16:42.767 lat (msec) : 2=56.45%, 4=38.84%, 10=1.61%, 20=0.39%, 50=0.01% 00:16:42.767 cpu : usr=34.25%, sys=64.61%, ctx=13, majf=0, minf=763 00:16:42.767 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.4%, 16=23.2%, 32=53.1%, >=64=2.4% 00:16:42.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.767 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0% 00:16:42.767 issued rwts: total=0,140266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:42.767 00:16:42.767 Run status group 0 (all jobs): 00:16:42.767 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=548MiB (575MB), run=5003-5003msec 00:16:43.337 ----------------------------------------------------- 00:16:43.337 Suppressions used: 00:16:43.337 count bytes template 00:16:43.337 1 11 /usr/src/fio/parse.c 00:16:43.337 1 8 libtcmalloc_minimal.so 00:16:43.337 1 904 libcrypto.so 00:16:43.337 ----------------------------------------------------- 00:16:43.337 00:16:43.337 00:16:43.337 real 0m14.818s 00:16:43.337 user 0m7.309s 00:16:43.337 sys 0m7.121s 00:16:43.337 ************************************ 00:16:43.337 END TEST xnvme_fio_plugin 00:16:43.337 ************************************ 00:16:43.337 13:14:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.337 13:14:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.337 13:14:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:43.337 13:14:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:43.337 13:14:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:16:43.337 13:14:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:43.337 13:14:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.337 13:14:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.337 13:14:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.337 ************************************ 00:16:43.337 START TEST xnvme_rpc 00:16:43.337 ************************************ 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=74312 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 74312 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 74312 ']' 00:16:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.337 13:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.597 [2024-12-11 13:14:34.919766] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
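[editor's note] The xnvme_rpc test starting above exercises the same bdev through the RPC plane instead of a JSON config: start spdk_tgt, wait for its UNIX socket, create the bdev with -c (conserve_cpu), and tear it down again. A rough sketch; the polling loop is a crude stand-in for the suite's waitforlisten helper:

# Sketch of the spdk_tgt + RPC flow from xnvme_rpc; repo path assumed
# from this log, rpc.py talks to the default /var/tmp/spdk.sock.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
tgt_pid=$!

# wait until the RPC socket answers (waitforlisten does this more robustly)
until "$SPDK/scripts/rpc.py" spdk_get_version &>/dev/null; do sleep 0.2; done

# same create call as rpc_cmd above; -c enables conserve_cpu
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# teardown mirrors the end of the test: delete the bdev, stop the target
"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"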
00:16:43.597 [2024-12-11 13:14:34.920106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74312 ] 00:16:43.597 [2024-12-11 13:14:35.109212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.856 [2024-12-11 13:14:35.256817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.793 xnvme_bdev 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.793 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 74312 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 74312 ']' 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 74312 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74312 00:16:45.052 killing process with pid 74312 00:16:45.052 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.053 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.053 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74312' 00:16:45.053 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 74312 00:16:45.053 13:14:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 74312 00:16:47.592 00:16:47.592 real 0m4.335s 00:16:47.592 user 0m4.237s 00:16:47.592 sys 0m0.720s 00:16:47.592 13:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.592 ************************************ 00:16:47.592 END TEST xnvme_rpc 00:16:47.592 ************************************ 00:16:47.592 13:14:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 13:14:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:47.852 13:14:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:47.852 13:14:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.852 13:14:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 ************************************ 00:16:47.852 START TEST xnvme_bdevperf 00:16:47.852 ************************************ 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:47.852 13:14:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 { 00:16:47.852 "subsystems": [ 00:16:47.852 { 00:16:47.852 "subsystem": "bdev", 00:16:47.852 "config": [ 00:16:47.852 { 00:16:47.852 "params": { 00:16:47.852 "io_mechanism": "io_uring_cmd", 00:16:47.852 "conserve_cpu": true, 00:16:47.852 "filename": "/dev/ng0n1", 00:16:47.852 "name": "xnvme_bdev" 00:16:47.852 }, 00:16:47.852 "method": "bdev_xnvme_create" 00:16:47.852 }, 00:16:47.852 { 00:16:47.852 "method": "bdev_wait_for_examine" 00:16:47.852 } 00:16:47.852 ] 00:16:47.852 } 00:16:47.852 ] 00:16:47.852 } 00:16:47.852 [2024-12-11 13:14:39.303715] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:16:47.852 [2024-12-11 13:14:39.303860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74396 ] 00:16:48.112 [2024-12-11 13:14:39.487571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.112 [2024-12-11 13:14:39.617752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.681 Running I/O for 5 seconds... 00:16:50.558 28544.00 IOPS, 111.50 MiB/s [2024-12-11T13:14:43.065Z] 26752.00 IOPS, 104.50 MiB/s [2024-12-11T13:14:44.446Z] 26325.33 IOPS, 102.83 MiB/s [2024-12-11T13:14:45.383Z] 25776.00 IOPS, 100.69 MiB/s 00:16:53.815 Latency(us) 00:16:53.815 [2024-12-11T13:14:45.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.815 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:53.815 xnvme_bdev : 5.00 25448.73 99.41 0.00 0.00 2506.73 1079.11 8369.66 00:16:53.815 [2024-12-11T13:14:45.383Z] =================================================================================================================== 00:16:53.815 [2024-12-11T13:14:45.383Z] Total : 25448.73 99.41 0.00 0.00 2506.73 1079.11 8369.66 00:16:54.751 13:14:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:54.751 13:14:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:54.751 13:14:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:54.751 13:14:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:54.751 13:14:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:54.751 { 00:16:54.751 "subsystems": [ 00:16:54.751 { 00:16:54.751 "subsystem": "bdev", 00:16:54.751 "config": [ 00:16:54.751 { 00:16:54.751 "params": { 00:16:54.751 "io_mechanism": "io_uring_cmd", 00:16:54.751 "conserve_cpu": true, 00:16:54.751 "filename": "/dev/ng0n1", 00:16:54.751 "name": "xnvme_bdev" 00:16:54.751 }, 00:16:54.751 "method": "bdev_xnvme_create" 00:16:54.751 }, 00:16:54.751 { 00:16:54.751 "method": "bdev_wait_for_examine" 00:16:54.751 } 00:16:54.751 ] 00:16:54.751 } 00:16:54.751 ] 00:16:54.751 } 00:16:55.010 [2024-12-11 13:14:46.322544] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
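[editor's note] The rpc_xnvme readbacks traced in the RPC test above all follow one pattern: dump the saved bdev subsystem config and filter one parameter of the bdev_xnvme_create entry with jq. A sketch of that helper, assuming a running target as in the previous sketch; the jq filter is copied verbatim from the common.sh trace:

# Sketch of rpc_xnvme from xnvme/common.sh: read back one create param.
SPDK=/home/vagrant/spdk_repo/spdk
rpc_xnvme() {
    local key=$1
    "$SPDK/scripts/rpc.py" framework_get_config bdev \
        | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$key"
}

# the assertions the test makes, as plain checks
[[ "$(rpc_xnvme name)" == xnvme_bdev ]]
[[ "$(rpc_xnvme filename)" == /dev/ng0n1 ]]
[[ "$(rpc_xnvme io_mechanism)" == io_uring_cmd ]]
[[ "$(rpc_xnvme conserve_cpu)" == true ]]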
00:16:55.010 [2024-12-11 13:14:46.322677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74478 ] 00:16:55.010 [2024-12-11 13:14:46.506064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.268 [2024-12-11 13:14:46.643786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.527 Running I/O for 5 seconds... 00:16:57.847 25344.00 IOPS, 99.00 MiB/s [2024-12-11T13:14:50.357Z] 24096.00 IOPS, 94.12 MiB/s [2024-12-11T13:14:51.296Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-11T13:14:52.234Z] 23488.00 IOPS, 91.75 MiB/s [2024-12-11T13:14:52.234Z] 23347.20 IOPS, 91.20 MiB/s 00:17:00.666 Latency(us) 00:17:00.666 [2024-12-11T13:14:52.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.666 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:00.666 xnvme_bdev : 5.01 23302.05 91.02 0.00 0.00 2737.19 901.45 8317.02 00:17:00.666 [2024-12-11T13:14:52.234Z] =================================================================================================================== 00:17:00.666 [2024-12-11T13:14:52.234Z] Total : 23302.05 91.02 0.00 0.00 2737.19 901.45 8317.02 00:17:02.046 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:02.046 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:02.046 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:02.046 13:14:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:02.046 13:14:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:02.046 { 00:17:02.046 "subsystems": [ 00:17:02.046 { 00:17:02.046 "subsystem": "bdev", 00:17:02.046 "config": [ 00:17:02.046 { 00:17:02.046 "params": { 00:17:02.046 "io_mechanism": "io_uring_cmd", 00:17:02.046 "conserve_cpu": true, 00:17:02.046 "filename": "/dev/ng0n1", 00:17:02.046 "name": "xnvme_bdev" 00:17:02.046 }, 00:17:02.046 "method": "bdev_xnvme_create" 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "method": "bdev_wait_for_examine" 00:17:02.046 } 00:17:02.046 ] 00:17:02.046 } 00:17:02.046 ] 00:17:02.046 } 00:17:02.046 [2024-12-11 13:14:53.377042] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:17:02.046 [2024-12-11 13:14:53.377203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74558 ] 00:17:02.046 [2024-12-11 13:14:53.562132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.306 [2024-12-11 13:14:53.691351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.566 Running I/O for 5 seconds... 
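[editor's note] The MiB/s column in these bdevperf tables is simply IOPS times the 4096-byte IO size; for instance the randwrite total row above checks out as 23302.05 IOPS * 4 KiB ~ 91.02 MiB/s. A one-liner to verify, with the values taken from that row:

# Sanity check: IOPS * io_size(bytes) / 2^20 = MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 23302.05 * 4096 / (1024 * 1024) }'
# -> 91.02 MiB/s, matching the table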
00:17:04.883 72192.00 IOPS, 282.00 MiB/s [2024-12-11T13:14:57.388Z] 72224.00 IOPS, 282.12 MiB/s [2024-12-11T13:14:58.326Z] 71594.67 IOPS, 279.67 MiB/s [2024-12-11T13:14:59.265Z] 71728.00 IOPS, 280.19 MiB/s [2024-12-11T13:14:59.265Z] 71808.00 IOPS, 280.50 MiB/s 00:17:07.697 Latency(us) 00:17:07.697 [2024-12-11T13:14:59.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.697 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:07.697 xnvme_bdev : 5.00 71795.20 280.45 0.00 0.00 888.77 657.99 2579.33 00:17:07.697 [2024-12-11T13:14:59.265Z] =================================================================================================================== 00:17:07.697 [2024-12-11T13:14:59.265Z] Total : 71795.20 280.45 0.00 0.00 888.77 657.99 2579.33 00:17:09.076 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:09.076 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:09.076 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:09.076 13:15:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:09.076 13:15:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:09.076 { 00:17:09.076 "subsystems": [ 00:17:09.076 { 00:17:09.076 "subsystem": "bdev", 00:17:09.076 "config": [ 00:17:09.076 { 00:17:09.076 "params": { 00:17:09.076 "io_mechanism": "io_uring_cmd", 00:17:09.076 "conserve_cpu": true, 00:17:09.076 "filename": "/dev/ng0n1", 00:17:09.076 "name": "xnvme_bdev" 00:17:09.076 }, 00:17:09.076 "method": "bdev_xnvme_create" 00:17:09.076 }, 00:17:09.076 { 00:17:09.076 "method": "bdev_wait_for_examine" 00:17:09.076 } 00:17:09.076 ] 00:17:09.076 } 00:17:09.076 ] 00:17:09.076 } 00:17:09.076 [2024-12-11 13:15:00.402569] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:17:09.076 [2024-12-11 13:15:00.402833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74632 ] 00:17:09.076 [2024-12-11 13:15:00.588860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.336 [2024-12-11 13:15:00.725047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.594 Running I/O for 5 seconds... 
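[editor's note] With the write_zeroes run starting above, this test has now cycled through all four workloads; the xnvme.sh@15 traces show they come from a single loop over an io-pattern array. A sketch of that driver loop, with the array contents and gen_conf helper assumed from the traces and config dumps above:

# Sketch of the xnvme.sh loop behind the four bdevperf runs in this test.
SPDK=/home/vagrant/spdk_repo/spdk
io_uring_cmd=(randread randwrite unmap write_zeroes)
for io_pattern in "${io_uring_cmd[@]}"; do
    "$SPDK/build/examples/bdevperf" --json <(gen_conf) \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done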
00:17:11.903 62637.00 IOPS, 244.68 MiB/s [2024-12-11T13:15:04.409Z] 42136.00 IOPS, 164.59 MiB/s [2024-12-11T13:15:05.346Z] 41707.00 IOPS, 162.92 MiB/s [2024-12-11T13:15:06.316Z] 42022.50 IOPS, 164.15 MiB/s [2024-12-11T13:15:06.316Z] 42382.80 IOPS, 165.56 MiB/s 00:17:14.748 Latency(us) 00:17:14.748 [2024-12-11T13:15:06.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.748 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:14.748 xnvme_bdev : 5.01 42321.40 165.32 0.00 0.00 1506.04 58.81 23582.43 00:17:14.748 [2024-12-11T13:15:06.316Z] =================================================================================================================== 00:17:14.748 [2024-12-11T13:15:06.316Z] Total : 42321.40 165.32 0.00 0.00 1506.04 58.81 23582.43 00:17:16.129 00:17:16.129 real 0m28.148s 00:17:16.129 user 0m17.126s 00:17:16.129 sys 0m9.128s 00:17:16.129 13:15:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.129 13:15:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:16.129 ************************************ 00:17:16.129 END TEST xnvme_bdevperf 00:17:16.129 ************************************ 00:17:16.129 13:15:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:16.129 13:15:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.129 13:15:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.129 13:15:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.129 ************************************ 00:17:16.129 START TEST xnvme_fio_plugin 00:17:16.129 ************************************ 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:16.129 { 00:17:16.129 "subsystems": [ 00:17:16.129 { 00:17:16.129 "subsystem": "bdev", 00:17:16.129 "config": [ 00:17:16.129 { 00:17:16.129 "params": { 00:17:16.129 "io_mechanism": "io_uring_cmd", 00:17:16.129 "conserve_cpu": true, 00:17:16.129 "filename": "/dev/ng0n1", 00:17:16.129 "name": "xnvme_bdev" 00:17:16.129 }, 00:17:16.129 "method": "bdev_xnvme_create" 00:17:16.129 }, 00:17:16.129 { 00:17:16.129 "method": "bdev_wait_for_examine" 00:17:16.129 } 00:17:16.129 ] 00:17:16.129 } 00:17:16.129 ] 00:17:16.129 } 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:16.129 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.388 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:16.388 fio-3.35 00:17:16.388 Starting 1 thread 00:17:22.961 00:17:22.961 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74756: Wed Dec 11 13:15:13 2024 00:17:22.961 read: IOPS=23.4k, BW=91.5MiB/s (96.0MB/s)(458MiB/5001msec) 00:17:22.961 slat (nsec): min=2507, max=73009, avg=8188.40, stdev=3621.33 00:17:22.961 clat (usec): min=1025, max=3271, avg=2404.34, stdev=302.72 00:17:22.961 lat (usec): min=1028, max=3325, avg=2412.52, stdev=303.98 00:17:22.961 clat percentiles (usec): 00:17:22.961 | 1.00th=[ 1303], 5.00th=[ 1876], 10.00th=[ 2057], 20.00th=[ 2212], 00:17:22.961 | 30.00th=[ 2278], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:17:22.961 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 2737], 95.00th=[ 2802], 00:17:22.961 | 99.00th=[ 2900], 99.50th=[ 2966], 99.90th=[ 3064], 99.95th=[ 3130], 00:17:22.961 | 99.99th=[ 3228] 00:17:22.961 bw ( KiB/s): min=87552, max=105984, per=100.00%, avg=94321.78, stdev=6013.43, samples=9 00:17:22.961 iops : min=21888, max=26496, avg=23580.44, stdev=1503.36, samples=9 00:17:22.961 lat (msec) : 2=8.15%, 4=91.85% 00:17:22.961 cpu : usr=46.10%, sys=50.08%, ctx=7, majf=0, minf=762 00:17:22.961 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:22.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.961 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:22.961 issued rwts: 
total=117184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.961 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:22.961 00:17:22.961 Run status group 0 (all jobs): 00:17:22.961 READ: bw=91.5MiB/s (96.0MB/s), 91.5MiB/s-91.5MiB/s (96.0MB/s-96.0MB/s), io=458MiB (480MB), run=5001-5001msec 00:17:23.531 ----------------------------------------------------- 00:17:23.531 Suppressions used: 00:17:23.531 count bytes template 00:17:23.531 1 11 /usr/src/fio/parse.c 00:17:23.531 1 8 libtcmalloc_minimal.so 00:17:23.531 1 904 libcrypto.so 00:17:23.531 ----------------------------------------------------- 00:17:23.531 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.531 13:15:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.531 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.531 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.531 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:23.531 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.531 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:17:23.531 { 00:17:23.531 "subsystems": [ 00:17:23.531 { 00:17:23.531 "subsystem": "bdev", 00:17:23.531 "config": [ 00:17:23.531 { 00:17:23.531 "params": { 00:17:23.531 "io_mechanism": "io_uring_cmd", 00:17:23.531 "conserve_cpu": true, 00:17:23.531 "filename": "/dev/ng0n1", 00:17:23.531 "name": "xnvme_bdev" 00:17:23.531 }, 00:17:23.531 "method": "bdev_xnvme_create" 00:17:23.531 }, 00:17:23.531 { 00:17:23.531 "method": "bdev_wait_for_examine" 00:17:23.531 } 00:17:23.531 ] 00:17:23.531 } 00:17:23.531 ] 00:17:23.531 } 00:17:23.789 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.789 fio-3.35 00:17:23.789 Starting 1 thread 00:17:30.365 00:17:30.365 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74857: Wed Dec 11 13:15:21 2024 00:17:30.365 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(466MiB/5002msec); 0 zone resets 00:17:30.365 slat (usec): min=4, max=1282, avg= 8.36, stdev= 4.96 00:17:30.365 clat (usec): min=1566, max=4839, avg=2348.59, stdev=234.55 00:17:30.365 lat (usec): min=1570, max=4893, avg=2356.96, stdev=235.40 00:17:30.365 clat percentiles (usec): 00:17:30.365 | 1.00th=[ 1827], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2147], 00:17:30.365 | 30.00th=[ 2212], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2409], 00:17:30.365 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2638], 95.00th=[ 2704], 00:17:30.365 | 99.00th=[ 2802], 99.50th=[ 2868], 99.90th=[ 3785], 99.95th=[ 4228], 00:17:30.365 | 99.99th=[ 4686] 00:17:30.365 bw ( KiB/s): min=91792, max=98304, per=99.94%, avg=95283.67, stdev=2450.05, samples=9 00:17:30.365 iops : min=22948, max=24576, avg=23820.89, stdev=612.52, samples=9 00:17:30.365 lat (msec) : 2=5.73%, 4=94.20%, 10=0.07% 00:17:30.365 cpu : usr=49.79%, sys=46.57%, ctx=15, majf=0, minf=763 00:17:30.365 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:30.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.365 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:30.365 issued rwts: total=0,119217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:30.365 00:17:30.365 Run status group 0 (all jobs): 00:17:30.365 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=466MiB (488MB), run=5002-5002msec 00:17:30.934 ----------------------------------------------------- 00:17:30.934 Suppressions used: 00:17:30.934 count bytes template 00:17:30.934 1 11 /usr/src/fio/parse.c 00:17:30.934 1 8 libtcmalloc_minimal.so 00:17:30.934 1 904 libcrypto.so 00:17:30.934 ----------------------------------------------------- 00:17:30.934 00:17:30.934 ************************************ 00:17:30.934 END TEST xnvme_fio_plugin 00:17:30.934 ************************************ 00:17:30.934 00:17:30.934 real 0m15.056s 00:17:30.934 user 0m8.808s 00:17:30.934 sys 0m5.629s 00:17:30.934 13:15:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.934 13:15:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:31.194 Process with pid 74312 is not found 00:17:31.194 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 74312 00:17:31.194 13:15:22 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74312 ']' 00:17:31.194 13:15:22 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 74312 00:17:31.194 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74312) - No such process 00:17:31.194 13:15:22 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 74312 is not found' 00:17:31.194 00:17:31.194 real 3m57.762s 00:17:31.194 user 2m8.366s 00:17:31.194 sys 1m31.576s 00:17:31.194 13:15:22 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.194 ************************************ 00:17:31.194 END TEST nvme_xnvme 00:17:31.194 ************************************ 00:17:31.194 13:15:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:31.194 13:15:22 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:31.194 13:15:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.194 13:15:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.194 13:15:22 -- common/autotest_common.sh@10 -- # set +x 00:17:31.194 ************************************ 00:17:31.194 START TEST blockdev_xnvme 00:17:31.194 ************************************ 00:17:31.194 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:31.194 * Looking for test storage... 00:17:31.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:31.194 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.453 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:17:31.453 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.453 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.453 13:15:22 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.453 13:15:22 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.453 13:15:22 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.453 13:15:22 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.454 13:15:22 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.454 --rc genhtml_branch_coverage=1 00:17:31.454 --rc genhtml_function_coverage=1 00:17:31.454 --rc genhtml_legend=1 00:17:31.454 --rc geninfo_all_blocks=1 00:17:31.454 --rc geninfo_unexecuted_blocks=1 00:17:31.454 00:17:31.454 ' 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.454 --rc genhtml_branch_coverage=1 00:17:31.454 --rc genhtml_function_coverage=1 00:17:31.454 --rc genhtml_legend=1 00:17:31.454 --rc geninfo_all_blocks=1 00:17:31.454 --rc geninfo_unexecuted_blocks=1 00:17:31.454 00:17:31.454 ' 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.454 --rc genhtml_branch_coverage=1 00:17:31.454 --rc genhtml_function_coverage=1 00:17:31.454 --rc genhtml_legend=1 00:17:31.454 --rc geninfo_all_blocks=1 00:17:31.454 --rc geninfo_unexecuted_blocks=1 00:17:31.454 00:17:31.454 ' 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.454 --rc genhtml_branch_coverage=1 00:17:31.454 --rc genhtml_function_coverage=1 00:17:31.454 --rc genhtml_legend=1 00:17:31.454 --rc geninfo_all_blocks=1 00:17:31.454 --rc geninfo_unexecuted_blocks=1 00:17:31.454 00:17:31.454 ' 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74991 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:31.454 13:15:22 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74991 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74991 ']' 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.454 13:15:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:31.454 [2024-12-11 13:15:23.005435] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
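[editor's note] The lt/cmp_versions traces above gate the lcov coverage options on "lt 1.15 2": each version string is split on dots and the fields are compared numerically, left to right. A minimal stand-in for that helper (the real scripts/common.sh version also validates the fields):

# Minimal numeric dot-separated version comparison, modelled on the
# cmp_versions trace above; handles plain numeric fields only.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the traced result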
00:17:31.454 [2024-12-11 13:15:23.005742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74991 ] 00:17:31.714 [2024-12-11 13:15:23.191270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.974 [2024-12-11 13:15:23.319360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.913 13:15:24 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.913 13:15:24 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:17:32.913 13:15:24 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:32.913 13:15:24 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:17:32.913 13:15:24 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:32.913 13:15:24 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:32.913 13:15:24 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:33.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:34.419 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:34.419 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:34.419 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:34.419 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:34.419 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:17:34.419 13:15:25 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.419 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:17:34.420 nvme0n1 00:17:34.420 nvme0n2 00:17:34.420 nvme0n3 00:17:34.420 nvme1n1 00:17:34.420 nvme2n1 00:17:34.420 nvme3n1 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 
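The setup_xnvme_conf step above walks every /dev/nvme*n* block device, skips any namespace whose /sys/block/*/queue/zoned entry says it is zoned, and queues one bdev_xnvme_create RPC per namespace with the io_uring I/O mechanism. A minimal sketch of issuing the same RPCs by hand, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock and the repo layout from this trace:

    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue    # only real block devices
        # args: filename, bdev name, io_mechanism; the harness also passes -c
        # (in SPDK's rpc.py this is the conserve-cpu flag -- an assumption here,
        # the log only shows the bare flag)
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
    done

The six names echoed back (nvme0n1 through nvme3n1) confirm each RPC registered a bdev.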
13:15:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 13:15:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 13:15:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:34.680 13:15:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.680 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:34.680 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4e578f2a-9c47-4583-829e-4d8f4e039392"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e578f2a-9c47-4583-829e-4d8f4e039392",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "015220e1-c91c-4a97-9e28-4947da4f25e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "015220e1-c91c-4a97-9e28-4947da4f25e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "471a34c6-4b7d-4617-b39b-3e3777debd5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "471a34c6-4b7d-4617-b39b-3e3777debd5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"502baf9f-6844-4f55-9197-79b701fa3092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "502baf9f-6844-4f55-9197-79b701fa3092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:34.681 ' "852e21a8-4931-471f-b07d-a9fc756d7344"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "852e21a8-4931-471f-b07d-a9fc756d7344",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37beb1fa-d522-4e60-b11b-a1c240799495"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37beb1fa-d522-4e60-b11b-a1c240799495",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:34.681 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:34.681 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:17:34.681 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:34.681 13:15:26 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74991 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74991 ']' 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74991 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 74991 00:17:34.681 killing process with pid 74991 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74991' 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74991 00:17:34.681 13:15:26 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74991 00:17:37.283 13:15:28 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.283 13:15:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:37.283 13:15:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:37.283 13:15:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.283 13:15:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.283 ************************************ 00:17:37.283 START TEST bdev_hello_world 00:17:37.283 ************************************ 00:17:37.283 13:15:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:37.283 [2024-12-11 13:15:28.751830] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:17:37.283 [2024-12-11 13:15:28.751980] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75292 ] 00:17:37.542 [2024-12-11 13:15:28.938230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.542 [2024-12-11 13:15:29.063051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.110 [2024-12-11 13:15:29.557035] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:38.110 [2024-12-11 13:15:29.557093] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:38.110 [2024-12-11 13:15:29.557129] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:38.110 [2024-12-11 13:15:29.559508] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:38.110 [2024-12-11 13:15:29.559996] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:38.110 [2024-12-11 13:15:29.560028] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:38.110 [2024-12-11 13:15:29.560247] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
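hello_bdev loads the same six xNVMe bdevs from a generated JSON config and runs one write/read round trip against nvme0n1, which is the NOTICE sequence just shown. A hand-rolled sketch of what such a config could look like; the parameter names inside "params" follow the standard SPDK subsystem-config layout and are my assumption, since this log only shows the CLI form of bdev_xnvme_create:

    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_xnvme_create",
          "params": { "filename": "/dev/nvme0n1", "name": "nvme0n1",
                      "io_mechanism": "io_uring", "conserve_cpu": true }
        } ]
      } ]
    }

Saved as /tmp/bdev.json, that file would drive the same example binary the trace invokes:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1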
00:17:38.110 00:17:38.110 [2024-12-11 13:15:29.560270] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:39.490 00:17:39.490 real 0m2.089s 00:17:39.490 user 0m1.657s 00:17:39.490 sys 0m0.316s 00:17:39.490 13:15:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.490 ************************************ 00:17:39.490 END TEST bdev_hello_world 00:17:39.490 ************************************ 00:17:39.490 13:15:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:39.490 13:15:30 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:39.490 13:15:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.490 13:15:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.490 13:15:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.490 ************************************ 00:17:39.490 START TEST bdev_bounds 00:17:39.490 ************************************ 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:39.490 Process bdevio pid: 75331 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75331 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75331' 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75331 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 75331 ']' 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.490 13:15:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:39.490 [2024-12-11 13:15:30.933231] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:17:39.490 [2024-12-11 13:15:30.933382] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75331 ] 00:17:39.748 [2024-12-11 13:15:31.121098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:39.748 [2024-12-11 13:15:31.254677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.749 [2024-12-11 13:15:31.254754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.749 [2024-12-11 13:15:31.254760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.316 13:15:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.316 13:15:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:40.316 13:15:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:40.316 I/O targets: 00:17:40.316 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:40.316 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:40.316 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:40.316 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:40.316 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:40.316 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:40.316 00:17:40.316 00:17:40.316 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.316 http://cunit.sourceforge.net/ 00:17:40.316 00:17:40.316 00:17:40.316 Suite: bdevio tests on: nvme3n1 00:17:40.316 Test: blockdev write read block ...passed 00:17:40.316 Test: blockdev write zeroes read block ...passed 00:17:40.575 Test: blockdev write zeroes read no split ...passed 00:17:40.575 Test: blockdev write zeroes read split ...passed 00:17:40.575 Test: blockdev write zeroes read split partial ...passed 00:17:40.575 Test: blockdev reset ...passed 00:17:40.575 Test: blockdev write read 8 blocks ...passed 00:17:40.575 Test: blockdev write read size > 128k ...passed 00:17:40.575 Test: blockdev write read invalid size ...passed 00:17:40.575 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.575 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.575 Test: blockdev write read max offset ...passed 00:17:40.575 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.575 Test: blockdev writev readv 8 blocks ...passed 00:17:40.575 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.575 Test: blockdev writev readv block ...passed 00:17:40.575 Test: blockdev writev readv size > 128k ...passed 00:17:40.575 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.575 Test: blockdev comparev and writev ...passed 00:17:40.575 Test: blockdev nvme passthru rw ...passed 00:17:40.575 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.575 Test: blockdev nvme admin passthru ...passed 00:17:40.575 Test: blockdev copy ...passed 00:17:40.575 Suite: bdevio tests on: nvme2n1 00:17:40.575 Test: blockdev write read block ...passed 00:17:40.575 Test: blockdev write zeroes read block ...passed 00:17:40.575 Test: blockdev write zeroes read no split ...passed 00:17:40.575 Test: blockdev write zeroes read split ...passed 00:17:40.575 Test: blockdev write zeroes read split partial ...passed 00:17:40.575 Test: blockdev reset ...passed 
00:17:40.575 Test: blockdev write read 8 blocks ...passed 00:17:40.575 Test: blockdev write read size > 128k ...passed 00:17:40.575 Test: blockdev write read invalid size ...passed 00:17:40.575 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.575 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.575 Test: blockdev write read max offset ...passed 00:17:40.575 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.575 Test: blockdev writev readv 8 blocks ...passed 00:17:40.575 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.575 Test: blockdev writev readv block ...passed 00:17:40.575 Test: blockdev writev readv size > 128k ...passed 00:17:40.575 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.575 Test: blockdev comparev and writev ...passed 00:17:40.575 Test: blockdev nvme passthru rw ...passed 00:17:40.575 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.575 Test: blockdev nvme admin passthru ...passed 00:17:40.575 Test: blockdev copy ...passed 00:17:40.575 Suite: bdevio tests on: nvme1n1 00:17:40.575 Test: blockdev write read block ...passed 00:17:40.575 Test: blockdev write zeroes read block ...passed 00:17:40.575 Test: blockdev write zeroes read no split ...passed 00:17:40.575 Test: blockdev write zeroes read split ...passed 00:17:40.834 Test: blockdev write zeroes read split partial ...passed 00:17:40.834 Test: blockdev reset ...passed 00:17:40.834 Test: blockdev write read 8 blocks ...passed 00:17:40.834 Test: blockdev write read size > 128k ...passed 00:17:40.834 Test: blockdev write read invalid size ...passed 00:17:40.834 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.834 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.834 Test: blockdev write read max offset ...passed 00:17:40.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.834 Test: blockdev writev readv 8 blocks ...passed 00:17:40.834 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.834 Test: blockdev writev readv block ...passed 00:17:40.834 Test: blockdev writev readv size > 128k ...passed 00:17:40.834 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.834 Test: blockdev comparev and writev ...passed 00:17:40.834 Test: blockdev nvme passthru rw ...passed 00:17:40.834 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.834 Test: blockdev nvme admin passthru ...passed 00:17:40.834 Test: blockdev copy ...passed 00:17:40.834 Suite: bdevio tests on: nvme0n3 00:17:40.834 Test: blockdev write read block ...passed 00:17:40.834 Test: blockdev write zeroes read block ...passed 00:17:40.834 Test: blockdev write zeroes read no split ...passed 00:17:40.834 Test: blockdev write zeroes read split ...passed 00:17:40.834 Test: blockdev write zeroes read split partial ...passed 00:17:40.834 Test: blockdev reset ...passed 00:17:40.834 Test: blockdev write read 8 blocks ...passed 00:17:40.834 Test: blockdev write read size > 128k ...passed 00:17:40.834 Test: blockdev write read invalid size ...passed 00:17:40.834 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.834 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.834 Test: blockdev write read max offset ...passed 00:17:40.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.834 Test: blockdev writev readv 8 blocks 
...passed 00:17:40.834 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.834 Test: blockdev writev readv block ...passed 00:17:40.834 Test: blockdev writev readv size > 128k ...passed 00:17:40.834 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.834 Test: blockdev comparev and writev ...passed 00:17:40.834 Test: blockdev nvme passthru rw ...passed 00:17:40.834 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.834 Test: blockdev nvme admin passthru ...passed 00:17:40.834 Test: blockdev copy ...passed 00:17:40.834 Suite: bdevio tests on: nvme0n2 00:17:40.834 Test: blockdev write read block ...passed 00:17:40.834 Test: blockdev write zeroes read block ...passed 00:17:40.834 Test: blockdev write zeroes read no split ...passed 00:17:40.834 Test: blockdev write zeroes read split ...passed 00:17:40.834 Test: blockdev write zeroes read split partial ...passed 00:17:40.834 Test: blockdev reset ...passed 00:17:40.834 Test: blockdev write read 8 blocks ...passed 00:17:40.834 Test: blockdev write read size > 128k ...passed 00:17:40.834 Test: blockdev write read invalid size ...passed 00:17:40.834 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.834 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.834 Test: blockdev write read max offset ...passed 00:17:40.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.835 Test: blockdev writev readv 8 blocks ...passed 00:17:40.835 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.835 Test: blockdev writev readv block ...passed 00:17:40.835 Test: blockdev writev readv size > 128k ...passed 00:17:40.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.835 Test: blockdev comparev and writev ...passed 00:17:40.835 Test: blockdev nvme passthru rw ...passed 00:17:40.835 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.835 Test: blockdev nvme admin passthru ...passed 00:17:40.835 Test: blockdev copy ...passed 00:17:40.835 Suite: bdevio tests on: nvme0n1 00:17:40.835 Test: blockdev write read block ...passed 00:17:40.835 Test: blockdev write zeroes read block ...passed 00:17:40.835 Test: blockdev write zeroes read no split ...passed 00:17:40.835 Test: blockdev write zeroes read split ...passed 00:17:40.835 Test: blockdev write zeroes read split partial ...passed 00:17:40.835 Test: blockdev reset ...passed 00:17:40.835 Test: blockdev write read 8 blocks ...passed 00:17:40.835 Test: blockdev write read size > 128k ...passed 00:17:40.835 Test: blockdev write read invalid size ...passed 00:17:40.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.835 Test: blockdev write read max offset ...passed 00:17:40.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.835 Test: blockdev writev readv 8 blocks ...passed 00:17:40.835 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.835 Test: blockdev writev readv block ...passed 00:17:40.835 Test: blockdev writev readv size > 128k ...passed 00:17:40.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.835 Test: blockdev comparev and writev ...passed 00:17:40.835 Test: blockdev nvme passthru rw ...passed 00:17:40.835 Test: blockdev nvme passthru vendor specific ...passed 00:17:40.835 Test: blockdev nvme admin passthru ...passed 00:17:40.835 Test: blockdev copy ...passed 
00:17:40.835 00:17:40.835 Run Summary: Type Total Ran Passed Failed Inactive 00:17:40.835 suites 6 6 n/a 0 0 00:17:40.835 tests 138 138 138 0 0 00:17:40.835 asserts 780 780 780 0 n/a 00:17:40.835 00:17:40.835 Elapsed time = 1.443 seconds 00:17:40.835 0 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75331 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 75331 ']' 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 75331 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75331 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.094 killing process with pid 75331 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75331' 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 75331 00:17:41.094 13:15:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 75331 00:17:42.474 13:15:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:42.474 00:17:42.474 real 0m2.867s 00:17:42.474 user 0m6.978s 00:17:42.474 sys 0m0.517s 00:17:42.474 13:15:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.474 ************************************ 00:17:42.474 END TEST bdev_bounds 00:17:42.474 ************************************ 00:17:42.474 13:15:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:42.474 13:15:33 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:42.474 13:15:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:42.474 13:15:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.474 13:15:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:42.474 ************************************ 00:17:42.474 START TEST bdev_nbd 00:17:42.474 ************************************ 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:42.474 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
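The run summary above covers 138 CUnit tests: six suites, one per bdev, 23 tests each, all driven over bdevio's RPC socket rather than its command line. The binary is started in wait mode and tests.py fires the suites. A sketch of the same two steps, reusing the paths and flags from this trace (the backgrounding, sleep, and kill are my framing; the harness uses its own waitforlisten/killprocess helpers):

    cd /home/vagrant/spdk_repo/spdk
    # -w: start up and wait for RPC-driven tests; -s 0 is passed through as in the trace
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    sleep 1    # assumption: the harness instead polls until the RPC socket is up
    test/bdev/bdevio/tests.py perform_tests    # runs every registered suite
    kill "$bdevio_pid"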
00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75396 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75396 /var/tmp/spdk-nbd.sock 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75396 ']' 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.475 13:15:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:42.475 [2024-12-11 13:15:33.886859] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:17:42.475 [2024-12-11 13:15:33.887009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.734 [2024-12-11 13:15:34.075187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.734 [2024-12-11 13:15:34.203394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:43.302 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.562 
1+0 records in 00:17:43.562 1+0 records out 00:17:43.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612221 s, 6.7 MB/s 00:17:43.562 13:15:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:43.562 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.824 1+0 records in 00:17:43.824 1+0 records out 00:17:43.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661318 s, 6.2 MB/s 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:43.824 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:44.083 13:15:35 
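Every nbd_start_disk above is followed by the same readiness probe: grep /proc/partitions for the new nbd name (up to 20 attempts), then prove the device answers I/O with one direct 4 KiB read whose byte count is checked with stat. A simplified sketch of that helper; the retry delay and the scratch-file path are assumptions, while the probe commands themselves are straight from the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay; not visible in the trace
        done
        # one direct 4 KiB read proves the mapping is live
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]    # the trace checks '[' 4096 '!=' 0 ']'
    }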
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.083 1+0 records in 00:17:44.083 1+0 records out 00:17:44.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596596 s, 6.9 MB/s 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.083 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.084 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:44.084 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:44.084 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:44.084 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.343 1+0 records in 00:17:44.343 1+0 records out 00:17:44.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697444 s, 5.9 MB/s 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:44.343 13:15:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.602 1+0 records in 00:17:44.602 1+0 records out 00:17:44.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00355414 s, 1.2 MB/s 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:44.602 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:44.862 13:15:36 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.862 1+0 records in 00:17:44.862 1+0 records out 00:17:44.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765075 s, 5.4 MB/s 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:44.862 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd0", 00:17:45.121 "bdev_name": "nvme0n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd1", 00:17:45.121 "bdev_name": "nvme0n2" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd2", 00:17:45.121 "bdev_name": "nvme0n3" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd3", 00:17:45.121 "bdev_name": "nvme1n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd4", 00:17:45.121 "bdev_name": "nvme2n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd5", 00:17:45.121 "bdev_name": "nvme3n1" 00:17:45.121 } 00:17:45.121 ]' 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd0", 00:17:45.121 "bdev_name": "nvme0n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd1", 00:17:45.121 "bdev_name": "nvme0n2" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd2", 00:17:45.121 "bdev_name": "nvme0n3" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd3", 00:17:45.121 "bdev_name": "nvme1n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd4", 00:17:45.121 "bdev_name": "nvme2n1" 00:17:45.121 }, 00:17:45.121 { 00:17:45.121 "nbd_device": "/dev/nbd5", 00:17:45.121 "bdev_name": "nvme3n1" 00:17:45.121 } 00:17:45.121 ]' 00:17:45.121 13:15:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.121 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.380 13:15:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:45.639 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.639 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.639 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.639 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.640 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.899 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.158 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
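Teardown mirrors setup: each mapped /dev/nbdN gets an nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears, and a final nbd_get_disks has to come back as an empty list before the harness restarts the disks for the data-verify pass. The same check by hand, on the socket from this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks    # expect: []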
00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:46.417 13:15:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:17:46.676 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:17:46.936 /dev/nbd0
00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.936 1+0 records in 00:17:46.936 1+0 records out 00:17:46.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665558 s, 6.2 MB/s 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:46.936 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:47.195 /dev/nbd1 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.195 1+0 records in 00:17:47.195 1+0 records out 00:17:47.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00168388 s, 2.4 MB/s 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.195 13:15:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:47.195 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:47.455 /dev/nbd10 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.455 1+0 records in 00:17:47.455 1+0 records out 00:17:47.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775999 s, 5.3 MB/s 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:47.455 13:15:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:47.714 /dev/nbd11 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.714 13:15:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.714 1+0 records in 00:17:47.714 1+0 records out 00:17:47.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823936 s, 5.0 MB/s 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:47.714 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:47.973 /dev/nbd12 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.973 1+0 records in 00:17:47.973 1+0 records out 00:17:47.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984415 s, 4.2 MB/s 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.973 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:47.974 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:48.232 /dev/nbd13 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:48.232 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.233 1+0 records in 00:17:48.233 1+0 records out 00:17:48.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752563 s, 5.4 MB/s 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:48.233 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd0", 00:17:48.492 "bdev_name": "nvme0n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd1", 00:17:48.492 "bdev_name": "nvme0n2" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd10", 00:17:48.492 "bdev_name": "nvme0n3" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd11", 00:17:48.492 "bdev_name": "nvme1n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd12", 00:17:48.492 "bdev_name": "nvme2n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd13", 00:17:48.492 "bdev_name": "nvme3n1" 00:17:48.492 } 00:17:48.492 ]' 00:17:48.492 13:15:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd0", 00:17:48.492 "bdev_name": "nvme0n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd1", 00:17:48.492 "bdev_name": "nvme0n2" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd10", 00:17:48.492 "bdev_name": "nvme0n3" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd11", 00:17:48.492 "bdev_name": "nvme1n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd12", 00:17:48.492 "bdev_name": "nvme2n1" 00:17:48.492 }, 00:17:48.492 { 00:17:48.492 "nbd_device": "/dev/nbd13", 00:17:48.492 "bdev_name": "nvme3n1" 00:17:48.492 } 00:17:48.492 ]' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:48.492 /dev/nbd1 00:17:48.492 /dev/nbd10 00:17:48.492 /dev/nbd11 00:17:48.492 /dev/nbd12 00:17:48.492 /dev/nbd13' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:48.492 /dev/nbd1 00:17:48.492 /dev/nbd10 00:17:48.492 /dev/nbd11 00:17:48.492 /dev/nbd12 00:17:48.492 /dev/nbd13' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:48.492 256+0 records in 00:17:48.492 256+0 records out 00:17:48.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116698 s, 89.9 MB/s 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:48.492 13:15:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:48.751 256+0 records in 00:17:48.751 256+0 records out 00:17:48.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123282 s, 8.5 MB/s 00:17:48.751 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:48.751 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:48.751 256+0 records in 00:17:48.751 256+0 records out 00:17:48.751 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.126778 s, 8.3 MB/s 00:17:48.751 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:48.751 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:49.011 256+0 records in 00:17:49.011 256+0 records out 00:17:49.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12461 s, 8.4 MB/s 00:17:49.011 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:49.011 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:49.011 256+0 records in 00:17:49.011 256+0 records out 00:17:49.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123329 s, 8.5 MB/s 00:17:49.011 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:49.011 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:49.270 256+0 records in 00:17:49.270 256+0 records out 00:17:49.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147818 s, 7.1 MB/s 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:49.270 256+0 records in 00:17:49.270 256+0 records out 00:17:49.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123434 s, 8.5 MB/s 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.270 13:15:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:49.529 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.529 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.529 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.529 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.529 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.530 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.530 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:49.530 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.530 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.530 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.789 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.048 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.307 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.566 13:15:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 ))
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:50.566 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:17:50.825 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:17:51.084 malloc_lvol_verify
00:17:51.084 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:17:51.343 814907ee-d6d3-4a67-b7be-4a40b478a478
00:17:51.343 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:17:51.602 d24aabc5-c8b4-4d9c-b69e-6aeb22d8ccb9
00:17:51.602 13:15:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:17:51.602 /dev/nbd0
00:17:51.602 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:17:51.602 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
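Stripped of the xtrace plumbing, the nbd_with_lvol_verify step just traced (and finished off by the mkfs.ext4 below) is a short RPC sequence: build a malloc bdev, layer a logical-volume store and a 4 MiB lvol on it, export the lvol over NBD, wait for the kernel to report a capacity, and format it. A condensed sketch; the retry loop for the capacity check is an assumption, since this run saw /sys/block/nbd0/size already at 8192 sectors on the first probe:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MB malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # lvstore on top of the malloc bdev
  $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol inside the store
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                # export the lvol as /dev/nbd0
  until (( $(cat /sys/block/nbd0/size) > 0 )); do       # wait for a nonzero capacity
      sleep 0.1                                         # assumed retry delay
  done
  mkfs.ext4 /dev/nbd0                                   # prove the exported device takes writes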
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:17:51.861 mke2fs 1.47.0 (5-Feb-2023)
00:17:51.861 Discarding device blocks: 0/4096 done
00:17:51.861 Creating filesystem with 4096 1k blocks and 1024 inodes
00:17:51.861 
00:17:51.861 Allocating group tables: 0/1 done
00:17:51.861 Writing inode tables: 0/1 done
00:17:51.861 Creating journal (1024 blocks): done
00:17:51.861 Writing superblocks and filesystem accounting information: 0/1 done
00:17:51.861 
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75396
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75396 ']'
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75396
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:51.861 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75396
00:17:52.121 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:52.121 killing process with pid 75396
00:17:52.121 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:52.121 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75396'
00:17:52.121 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75396
00:17:52.121 13:15:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75396
00:17:53.501 ************************************
00:17:53.501 END TEST bdev_nbd
00:17:53.501 ************************************
00:17:53.501 13:15:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:17:53.501 
00:17:53.501 real 0m10.939s
00:17:53.501 user 0m13.793s
00:17:53.501 sys 0m4.888s
00:17:53.501 13:15:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.501 13:15:44 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.501 13:15:44 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:53.501 13:15:44 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:53.501 13:15:44 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:53.501 13:15:44 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:53.501 13:15:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.501 13:15:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.501 13:15:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.501 ************************************ 00:17:53.501 START TEST bdev_fio 00:17:53.501 ************************************ 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:53.501 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:53.501 
13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
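Pieced together from the echoes above, the generated bdev.fio pairs one job per xNVMe bdev with the header fio_config_gen wrote earlier (which ends in serialize_overlap=1, added because the fio binary reported version 3.35). The [global] body below is a sketch: apart from serialize_overlap=1 its verify options come from the template and are not echoed in this log:

  [global]
  ; verify-workload defaults from fio_config_gen (not shown in the trace)
  serialize_overlap=1

  [job_nvme0n1]
  filename=nvme0n1

  [job_nvme0n2]
  filename=nvme0n2

  [job_nvme0n3]
  filename=nvme0n3

  [job_nvme1n1]
  filename=nvme1n1

  [job_nvme2n1]
  filename=nvme2n1

  [job_nvme3n1]
  filename=nvme3n1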
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:17:53.501 ************************************
00:17:53.501 START TEST bdev_fio_rw_verify
00:17:53.501 ************************************
00:17:53.501 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:53.502 13:15:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:17:53.760 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:53.760 fio-3.35
00:17:53.760 
00:17:53.760 Starting 6 threads
00:18:05.963 
00:18:05.963 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75808: Wed Dec 11 13:15:56 2024
00:18:05.963 read: IOPS=33.5k, BW=131MiB/s (137MB/s)(1310MiB/10001msec)
00:18:05.963 slat (usec): min=2, max=1971, avg= 6.82, stdev= 6.36
00:18:05.963 clat (usec): min=75, max=6148, avg=574.73, stdev=175.06
00:18:05.963 lat (usec): min=78, max=6153, avg=581.55, stdev=176.09
00:18:05.963 clat percentiles (usec):
00:18:05.963 | 50.000th=[ 611], 99.000th=[ 1004], 99.900th=[ 1582], 99.990th=[ 3556],
00:18:05.963 | 99.999th=[ 6128]
00:18:05.963 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1326MiB/10001msec); 0 zone resets
00:18:05.963 slat (usec): min=10, max=4564, avg=20.50, stdev=22.58
00:18:05.963 clat (usec): min=79, max=5207, avg=639.44, stdev=183.47
00:18:05.963 lat (usec): min=92, max=5224, avg=659.94, stdev=185.77
00:18:05.963 clat percentiles (usec):
00:18:05.963 | 50.000th=[ 652], 99.000th=[ 1205], 99.900th=[ 1876], 99.990th=[ 3392],
00:18:05.963 | 99.999th=[ 3654]
00:18:05.963 bw ( KiB/s): min=111560, max=150464, per=99.86%, avg=135618.74, stdev=1960.13, samples=114
00:18:05.963 iops : min=27890, max=37616, avg=33904.42, stdev=490.04, samples=114
00:18:05.963 lat (usec) : 100=0.01%, 250=3.84%, 500=17.78%, 750=66.15%, 1000=10.35%
00:18:05.963 lat (msec) : 2=1.81%, 4=0.05%, 10=0.01%
00:18:05.963 cpu : usr=65.72%, sys=23.31%, ctx=8128, majf=0, minf=27853
00:18:05.963 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:05.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:05.963 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:05.963 issued rwts: total=335367,339550,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:05.963 latency : target=0, window=0, percentile=100.00%, depth=8
00:18:05.963 
00:18:05.963 Run status group 0 (all jobs):
00:18:05.963 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1310MiB (1374MB), run=10001-10001msec
00:18:05.963 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1326MiB (1391MB), run=10001-10001msec
00:18:06.223 -----------------------------------------------------
00:18:06.223 Suppressions used:
00:18:06.223 count bytes template
00:18:06.223 6 48 /usr/src/fio/parse.c
00:18:06.223 3953 379488 /usr/src/fio/iolog.c
00:18:06.223 1 8 libtcmalloc_minimal.so
00:18:06.223 1 904 libcrypto.so
00:18:06.223 -----------------------------------------------------
00:18:06.223 
00:18:06.223 
00:18:06.223 real 0m12.688s
00:18:06.223 user 0m41.518s
00:18:06.223 sys 0m14.507s
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:18:06.223 ************************************
00:18:06.223 END TEST bdev_fio_rw_verify
00:18:06.223 ************************************
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:06.223 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4e578f2a-9c47-4583-829e-4d8f4e039392"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e578f2a-9c47-4583-829e-4d8f4e039392",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "015220e1-c91c-4a97-9e28-4947da4f25e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "015220e1-c91c-4a97-9e28-4947da4f25e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "471a34c6-4b7d-4617-b39b-3e3777debd5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "471a34c6-4b7d-4617-b39b-3e3777debd5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "502baf9f-6844-4f55-9197-79b701fa3092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "502baf9f-6844-4f55-9197-79b701fa3092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "852e21a8-4931-471f-b07d-a9fc756d7344"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "852e21a8-4931-471f-b07d-a9fc756d7344",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37beb1fa-d522-4e60-b11b-a1c240799495"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37beb1fa-d522-4e60-b11b-a1c240799495",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:06.224 /home/vagrant/spdk_repo/spdk 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
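The jq filter at blockdev.sh@354 above is what decides whether a trim pass runs: it keeps only bdevs whose supported_io_types.unmap is true, and every xNVMe bdev in the JSON dump reports "unmap": false, so the name list comes back empty, [[ -n '' ]] fails, and the freshly generated trim job file is discarded without fio being run again. Against a stream that did contain an unmap-capable bdev, the same filter would print its name (hypothetical two-object input):

  printf '%s\n' \
      '{"name": "nvme0n1", "supported_io_types": {"unmap": false}}' \
      '{"name": "lvs/lvol", "supported_io_types": {"unmap": true}}' |
      jq -r 'select(.supported_io_types.unmap == true) | .name'
  # prints: lvs/lvol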
00:18:06.224 00:18:06.224 real 0m12.924s 00:18:06.224 user 0m41.633s 00:18:06.224 sys 0m14.635s 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.224 13:15:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:06.224 ************************************ 00:18:06.224 END TEST bdev_fio 00:18:06.224 ************************************ 00:18:06.224 13:15:57 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:06.224 13:15:57 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:06.224 13:15:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:06.224 13:15:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.224 13:15:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:06.484 ************************************ 00:18:06.484 START TEST bdev_verify 00:18:06.484 ************************************ 00:18:06.484 13:15:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:06.484 [2024-12-11 13:15:57.901662] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:06.484 [2024-12-11 13:15:57.901817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75977 ] 00:18:06.743 [2024-12-11 13:15:58.089421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:06.743 [2024-12-11 13:15:58.221463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.743 [2024-12-11 13:15:58.221495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.311 Running I/O for 5 seconds... 
00:18:09.620 25056.00 IOPS, 97.88 MiB/s [2024-12-11T13:16:02.123Z] 25104.00 IOPS, 98.06 MiB/s [2024-12-11T13:16:03.059Z] 25269.33 IOPS, 98.71 MiB/s [2024-12-11T13:16:03.996Z] 25320.00 IOPS, 98.91 MiB/s [2024-12-11T13:16:03.996Z] 25152.00 IOPS, 98.25 MiB/s
00:18:12.428 Latency(us)
00:18:12.428 [2024-12-11T13:16:03.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.428 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0x80000
00:18:12.428 nvme0n1 : 5.05 1876.89 7.33 0.00 0.00 68098.86 13475.68 56429.39
00:18:12.428 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x80000 length 0x80000
00:18:12.428 nvme0n1 : 5.05 1951.87 7.62 0.00 0.00 65476.28 9264.53 62746.11
00:18:12.428 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0x80000
00:18:12.428 nvme0n2 : 5.06 1871.52 7.31 0.00 0.00 68204.02 14739.02 60640.54
00:18:12.428 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x80000 length 0x80000
00:18:12.428 nvme0n2 : 5.06 1947.61 7.61 0.00 0.00 65536.46 9527.72 63588.34
00:18:12.428 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0x80000
00:18:12.428 nvme0n3 : 5.06 1870.82 7.31 0.00 0.00 68139.66 10001.48 66115.03
00:18:12.428 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x80000 length 0x80000
00:18:12.428 nvme0n3 : 5.04 1928.34 7.53 0.00 0.00 66109.47 15686.53 60219.42
00:18:12.428 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0x20000
00:18:12.428 nvme1n1 : 5.07 1917.32 7.49 0.00 0.00 66383.21 7053.67 67378.38
00:18:12.428 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x20000 length 0x20000
00:18:12.428 nvme1n1 : 5.05 1927.88 7.53 0.00 0.00 66039.23 10633.15 64851.69
00:18:12.428 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0xbd0bd
00:18:12.428 nvme2n1 : 5.07 2888.07 11.28 0.00 0.00 43943.21 4895.46 59798.31
00:18:12.428 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:12.428 nvme2n1 : 5.07 2931.04 11.45 0.00 0.00 43342.69 5921.93 60640.54
00:18:12.428 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0x0 length 0xa0000
00:18:12.428 nvme3n1 : 5.07 1893.67 7.40 0.00 0.00 66919.07 7474.79 64430.57
00:18:12.428 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:12.428 Verification LBA range: start 0xa0000 length 0xa0000
00:18:12.428 nvme3n1 : 5.06 1948.33 7.61 0.00 0.00 65163.77 7369.51 63167.23
00:18:12.428 [2024-12-11T13:16:03.996Z] ===================================================================================================================
00:18:12.428 [2024-12-11T13:16:03.996Z] Total : 24953.37 97.47 0.00 0.00 61227.51 4895.46 67378.38
00:18:13.807
00:18:13.807 real 0m7.329s
00:18:13.807 user 0m11.013s
00:18:13.807 sys 0m2.240s
00:18:13.807 13:16:05 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.807 13:16:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:13.807 ************************************ 00:18:13.807 END TEST bdev_verify 00:18:13.807 ************************************ 00:18:13.807 13:16:05 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:13.807 13:16:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:13.807 13:16:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.807 13:16:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:13.807 ************************************ 00:18:13.807 START TEST bdev_verify_big_io 00:18:13.807 ************************************ 00:18:13.807 13:16:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:13.807 [2024-12-11 13:16:05.300049] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:13.807 [2024-12-11 13:16:05.300196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76078 ] 00:18:14.066 [2024-12-11 13:16:05.484965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.066 [2024-12-11 13:16:05.619620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.066 [2024-12-11 13:16:05.619651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.003 Running I/O for 5 seconds... 
00:18:20.196 2264.00 IOPS, 141.50 MiB/s [2024-12-11T13:16:12.023Z] 3664.00 IOPS, 229.00 MiB/s [2024-12-11T13:16:12.023Z] 3514.67 IOPS, 219.67 MiB/s
00:18:20.455 Latency(us)
00:18:20.455 [2024-12-11T13:16:12.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:20.455 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x0 length 0x8000
00:18:20.455 nvme0n1 : 5.66 104.57 6.54 0.00 0.00 1201149.04 38953.12 1549702.68
00:18:20.455 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x8000 length 0x8000
00:18:20.455 nvme0n1 : 5.41 212.89 13.31 0.00 0.00 581373.74 5448.17 727686.48
00:18:20.455 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x0 length 0x8000
00:18:20.455 nvme0n2 : 5.67 124.16 7.76 0.00 0.00 991538.86 24319.38 909608.10
00:18:20.455 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x8000 length 0x8000
00:18:20.455 nvme0n2 : 5.60 228.69 14.29 0.00 0.00 533600.23 57271.62 697366.21
00:18:20.455 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x0 length 0x8000
00:18:20.455 nvme0n3 : 5.67 87.43 5.46 0.00 0.00 1361898.93 3842.67 2236962.13
00:18:20.455 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x8000 length 0x8000
00:18:20.455 nvme0n3 : 5.55 218.91 13.68 0.00 0.00 538954.12 95593.07 485124.32
00:18:20.455 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x0 length 0x2000
00:18:20.455 nvme1n1 : 5.68 112.66 7.04 0.00 0.00 1039119.34 31794.17 1650770.25
00:18:20.455 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x2000 length 0x2000
00:18:20.455 nvme1n1 : 5.66 203.47 12.72 0.00 0.00 576835.97 60640.54 697366.21
00:18:20.455 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0x0 length 0xbd0b
00:18:20.455 nvme2n1 : 5.68 121.03 7.56 0.00 0.00 944304.01 12528.17 2439097.27
00:18:20.455 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.455 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:20.455 nvme2n1 : 5.67 225.76 14.11 0.00 0.00 510282.21 9843.56 670414.86
00:18:20.456 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:20.456 Verification LBA range: start 0x0 length 0xa000
00:18:20.456 nvme3n1 : 5.69 123.77 7.74 0.00 0.00 890634.00 28425.25 976986.47
00:18:20.456 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:20.456 Verification LBA range: start 0xa000 length 0xa000
00:18:20.456 nvme3n1 : 5.67 225.60 14.10 0.00 0.00 498211.35 2553.01 673783.78
00:18:20.456 [2024-12-11T13:16:12.024Z] ===================================================================================================================
00:18:20.456 [2024-12-11T13:16:12.024Z] Total : 1988.92 124.31 0.00 0.00 714400.50 2553.01 2439097.27
00:18:21.834
00:18:21.834 real 0m8.190s
00:18:21.834 user 0m14.759s
00:18:21.834 sys 0m0.642s
00:18:21.834 13:16:13 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:21.834 13:16:13 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.834 ************************************ 00:18:21.834 END TEST bdev_verify_big_io 00:18:21.834 ************************************ 00:18:22.093 13:16:13 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:22.093 13:16:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:22.093 13:16:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.093 13:16:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.093 ************************************ 00:18:22.093 START TEST bdev_write_zeroes 00:18:22.093 ************************************ 00:18:22.093 13:16:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:22.093 [2024-12-11 13:16:13.566793] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:22.093 [2024-12-11 13:16:13.566918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76192 ] 00:18:22.352 [2024-12-11 13:16:13.750560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.352 [2024-12-11 13:16:13.857484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.921 Running I/O for 1 seconds... 
00:18:23.884 51008.00 IOPS, 199.25 MiB/s
00:18:23.884 Latency(us)
00:18:23.884 [2024-12-11T13:16:15.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:23.884 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme0n1 : 1.02 7905.79 30.88 0.00 0.00 16175.52 8159.10 29267.48
00:18:23.884 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme0n2 : 1.02 7895.37 30.84 0.00 0.00 16187.84 8369.66 29688.60
00:18:23.884 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme0n3 : 1.02 7886.30 30.81 0.00 0.00 16197.34 8422.30 30320.27
00:18:23.884 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme1n1 : 1.02 7879.50 30.78 0.00 0.00 16201.30 8422.30 30741.38
00:18:23.884 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme2n1 : 1.03 11250.29 43.95 0.00 0.00 11337.30 4421.71 22634.92
00:18:23.884 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:23.884 nvme3n1 : 1.02 7871.46 30.75 0.00 0.00 16129.89 3974.27 30530.83
00:18:23.884 [2024-12-11T13:16:15.452Z] ===================================================================================================================
00:18:23.884 [2024-12-11T13:16:15.452Z] Total : 50688.72 198.00 0.00 0.00 15097.94 3974.27 30741.38
00:18:25.264
00:18:25.264 real 0m3.028s
00:18:25.264 user 0m2.271s
00:18:25.264 sys 0m0.558s
00:18:25.264 13:16:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:25.264 13:16:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:18:25.264 ************************************
00:18:25.264 END TEST bdev_write_zeroes
00:18:25.264 ************************************
00:18:25.264 13:16:16 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:25.264 13:16:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:25.264 13:16:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:25.264 13:16:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:25.264 ************************************
00:18:25.264 START TEST bdev_json_nonenclosed
00:18:25.264 ************************************
00:18:25.264 13:16:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:26.043 [2024-12-11 13:16:16.679362] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:18:25.264 [2024-12-11 13:16:16.679483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76247 ] 00:18:25.524 [2024-12-11 13:16:16.862200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.524 [2024-12-11 13:16:16.992883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.524 [2024-12-11 13:16:16.992985] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:25.524 [2024-12-11 13:16:16.993008] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:25.524 [2024-12-11 13:16:16.993020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:25.784 00:18:25.784 real 0m0.671s 00:18:25.784 user 0m0.407s 00:18:25.784 sys 0m0.160s 00:18:25.784 13:16:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.784 13:16:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:25.784 ************************************ 00:18:25.784 END TEST bdev_json_nonenclosed 00:18:25.784 ************************************ 00:18:25.784 13:16:17 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:25.784 13:16:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:25.784 13:16:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.784 13:16:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.784 ************************************ 00:18:25.784 START TEST bdev_json_nonarray 00:18:25.784 ************************************ 00:18:25.784 13:16:17 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:26.043 [2024-12-11 13:16:17.430359] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:26.043 [2024-12-11 13:16:17.430481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76278 ] 00:18:26.302 [2024-12-11 13:16:17.610639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.302 [2024-12-11 13:16:17.737204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.302 [2024-12-11 13:16:17.737312] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:26.302 [2024-12-11 13:16:17.737337] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:26.302 [2024-12-11 13:16:17.737350] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:26.562 00:18:26.562 real 0m0.668s 00:18:26.562 user 0m0.404s 00:18:26.562 sys 0m0.160s 00:18:26.562 13:16:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.562 13:16:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:26.562 ************************************ 00:18:26.562 END TEST bdev_json_nonarray 00:18:26.562 ************************************ 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:26.562 13:16:18 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:27.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:34.073 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:34.073 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:34.073 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:34.073 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:34.073 00:18:34.073 real 1m2.207s 00:18:34.073 user 1m40.266s 00:18:34.073 sys 0m35.298s 00:18:34.073 13:16:24 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.073 13:16:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:34.073 ************************************ 00:18:34.073 END TEST blockdev_xnvme 00:18:34.073 ************************************ 00:18:34.073 13:16:24 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:34.073 13:16:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.073 13:16:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.073 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:18:34.073 ************************************ 00:18:34.073 START TEST ublk 00:18:34.073 ************************************ 00:18:34.073 13:16:24 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:34.073 * Looking for test storage... 
00:18:34.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.073 13:16:25 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.073 13:16:25 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.073 13:16:25 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.073 13:16:25 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.073 13:16:25 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.073 13:16:25 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:34.073 13:16:25 ublk -- scripts/common.sh@345 -- # : 1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.073 13:16:25 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.073 13:16:25 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@353 -- # local d=1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.073 13:16:25 ublk -- scripts/common.sh@355 -- # echo 1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.073 13:16:25 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@353 -- # local d=2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.073 13:16:25 ublk -- scripts/common.sh@355 -- # echo 2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.073 13:16:25 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.073 13:16:25 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.073 13:16:25 ublk -- scripts/common.sh@368 -- # return 0 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.073 --rc genhtml_branch_coverage=1 00:18:34.073 --rc genhtml_function_coverage=1 00:18:34.073 --rc genhtml_legend=1 00:18:34.073 --rc geninfo_all_blocks=1 00:18:34.073 --rc geninfo_unexecuted_blocks=1 00:18:34.073 00:18:34.073 ' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.073 --rc genhtml_branch_coverage=1 00:18:34.073 --rc genhtml_function_coverage=1 00:18:34.073 --rc genhtml_legend=1 00:18:34.073 --rc geninfo_all_blocks=1 00:18:34.073 --rc geninfo_unexecuted_blocks=1 00:18:34.073 00:18:34.073 ' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.073 --rc genhtml_branch_coverage=1 00:18:34.073 --rc 
genhtml_function_coverage=1 00:18:34.073 --rc genhtml_legend=1 00:18:34.073 --rc geninfo_all_blocks=1 00:18:34.073 --rc geninfo_unexecuted_blocks=1 00:18:34.073 00:18:34.073 ' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.073 --rc genhtml_branch_coverage=1 00:18:34.073 --rc genhtml_function_coverage=1 00:18:34.073 --rc genhtml_legend=1 00:18:34.073 --rc geninfo_all_blocks=1 00:18:34.073 --rc geninfo_unexecuted_blocks=1 00:18:34.073 00:18:34.073 ' 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:34.073 13:16:25 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:34.073 13:16:25 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:34.073 13:16:25 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:34.073 13:16:25 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:34.073 13:16:25 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:34.073 13:16:25 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:34.073 13:16:25 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:34.073 13:16:25 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:34.073 13:16:25 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.073 13:16:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:34.073 ************************************ 00:18:34.073 START TEST test_save_ublk_config 00:18:34.073 ************************************ 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76585 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76585 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76585 ']' 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.073 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:34.074 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.074 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.074 13:16:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:34.074 [2024-12-11 13:16:25.296605] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:34.074 [2024-12-11 13:16:25.296764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76585 ] 00:18:34.074 [2024-12-11 13:16:25.473492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.074 [2024-12-11 13:16:25.610071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.452 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.452 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:35.452 13:16:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:35.453 [2024-12-11 13:16:26.672157] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:35.453 [2024-12-11 13:16:26.673507] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:35.453 malloc0 00:18:35.453 [2024-12-11 13:16:26.774287] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:35.453 [2024-12-11 13:16:26.774424] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:35.453 [2024-12-11 13:16:26.774440] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:35.453 [2024-12-11 13:16:26.774450] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:35.453 [2024-12-11 13:16:26.782182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:35.453 [2024-12-11 13:16:26.782211] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:35.453 [2024-12-11 13:16:26.790150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:35.453 [2024-12-11 13:16:26.790272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:35.453 [2024-12-11 13:16:26.814160] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:35.453 0 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.453 13:16:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:35.712 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.712 13:16:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:35.712 
"subsystems": [ 00:18:35.712 { 00:18:35.712 "subsystem": "fsdev", 00:18:35.712 "config": [ 00:18:35.712 { 00:18:35.712 "method": "fsdev_set_opts", 00:18:35.712 "params": { 00:18:35.712 "fsdev_io_pool_size": 65535, 00:18:35.712 "fsdev_io_cache_size": 256 00:18:35.712 } 00:18:35.712 } 00:18:35.712 ] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "keyring", 00:18:35.712 "config": [] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "iobuf", 00:18:35.712 "config": [ 00:18:35.712 { 00:18:35.712 "method": "iobuf_set_options", 00:18:35.712 "params": { 00:18:35.712 "small_pool_count": 8192, 00:18:35.712 "large_pool_count": 1024, 00:18:35.712 "small_bufsize": 8192, 00:18:35.712 "large_bufsize": 135168, 00:18:35.712 "enable_numa": false 00:18:35.712 } 00:18:35.712 } 00:18:35.712 ] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "sock", 00:18:35.712 "config": [ 00:18:35.712 { 00:18:35.712 "method": "sock_set_default_impl", 00:18:35.712 "params": { 00:18:35.712 "impl_name": "posix" 00:18:35.712 } 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "method": "sock_impl_set_options", 00:18:35.712 "params": { 00:18:35.712 "impl_name": "ssl", 00:18:35.712 "recv_buf_size": 4096, 00:18:35.712 "send_buf_size": 4096, 00:18:35.712 "enable_recv_pipe": true, 00:18:35.712 "enable_quickack": false, 00:18:35.712 "enable_placement_id": 0, 00:18:35.712 "enable_zerocopy_send_server": true, 00:18:35.712 "enable_zerocopy_send_client": false, 00:18:35.712 "zerocopy_threshold": 0, 00:18:35.712 "tls_version": 0, 00:18:35.712 "enable_ktls": false 00:18:35.712 } 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "method": "sock_impl_set_options", 00:18:35.712 "params": { 00:18:35.712 "impl_name": "posix", 00:18:35.712 "recv_buf_size": 2097152, 00:18:35.712 "send_buf_size": 2097152, 00:18:35.712 "enable_recv_pipe": true, 00:18:35.712 "enable_quickack": false, 00:18:35.712 "enable_placement_id": 0, 00:18:35.712 "enable_zerocopy_send_server": true, 00:18:35.712 "enable_zerocopy_send_client": false, 00:18:35.712 "zerocopy_threshold": 0, 00:18:35.712 "tls_version": 0, 00:18:35.712 "enable_ktls": false 00:18:35.712 } 00:18:35.712 } 00:18:35.712 ] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "vmd", 00:18:35.712 "config": [] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "accel", 00:18:35.712 "config": [ 00:18:35.712 { 00:18:35.712 "method": "accel_set_options", 00:18:35.712 "params": { 00:18:35.712 "small_cache_size": 128, 00:18:35.712 "large_cache_size": 16, 00:18:35.712 "task_count": 2048, 00:18:35.712 "sequence_count": 2048, 00:18:35.712 "buf_count": 2048 00:18:35.712 } 00:18:35.712 } 00:18:35.712 ] 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "subsystem": "bdev", 00:18:35.712 "config": [ 00:18:35.712 { 00:18:35.712 "method": "bdev_set_options", 00:18:35.712 "params": { 00:18:35.712 "bdev_io_pool_size": 65535, 00:18:35.712 "bdev_io_cache_size": 256, 00:18:35.712 "bdev_auto_examine": true, 00:18:35.712 "iobuf_small_cache_size": 128, 00:18:35.712 "iobuf_large_cache_size": 16 00:18:35.712 } 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "method": "bdev_raid_set_options", 00:18:35.712 "params": { 00:18:35.712 "process_window_size_kb": 1024, 00:18:35.712 "process_max_bandwidth_mb_sec": 0 00:18:35.712 } 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "method": "bdev_iscsi_set_options", 00:18:35.712 "params": { 00:18:35.712 "timeout_sec": 30 00:18:35.712 } 00:18:35.712 }, 00:18:35.712 { 00:18:35.712 "method": "bdev_nvme_set_options", 00:18:35.712 "params": { 00:18:35.712 "action_on_timeout": "none", 
00:18:35.712 "timeout_us": 0, 00:18:35.712 "timeout_admin_us": 0, 00:18:35.712 "keep_alive_timeout_ms": 10000, 00:18:35.712 "arbitration_burst": 0, 00:18:35.712 "low_priority_weight": 0, 00:18:35.712 "medium_priority_weight": 0, 00:18:35.712 "high_priority_weight": 0, 00:18:35.712 "nvme_adminq_poll_period_us": 10000, 00:18:35.712 "nvme_ioq_poll_period_us": 0, 00:18:35.712 "io_queue_requests": 0, 00:18:35.713 "delay_cmd_submit": true, 00:18:35.713 "transport_retry_count": 4, 00:18:35.713 "bdev_retry_count": 3, 00:18:35.713 "transport_ack_timeout": 0, 00:18:35.713 "ctrlr_loss_timeout_sec": 0, 00:18:35.713 "reconnect_delay_sec": 0, 00:18:35.713 "fast_io_fail_timeout_sec": 0, 00:18:35.713 "disable_auto_failback": false, 00:18:35.713 "generate_uuids": false, 00:18:35.713 "transport_tos": 0, 00:18:35.713 "nvme_error_stat": false, 00:18:35.713 "rdma_srq_size": 0, 00:18:35.713 "io_path_stat": false, 00:18:35.713 "allow_accel_sequence": false, 00:18:35.713 "rdma_max_cq_size": 0, 00:18:35.713 "rdma_cm_event_timeout_ms": 0, 00:18:35.713 "dhchap_digests": [ 00:18:35.713 "sha256", 00:18:35.713 "sha384", 00:18:35.713 "sha512" 00:18:35.713 ], 00:18:35.713 "dhchap_dhgroups": [ 00:18:35.713 "null", 00:18:35.713 "ffdhe2048", 00:18:35.713 "ffdhe3072", 00:18:35.713 "ffdhe4096", 00:18:35.713 "ffdhe6144", 00:18:35.713 "ffdhe8192" 00:18:35.713 ], 00:18:35.713 "rdma_umr_per_io": false 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "bdev_nvme_set_hotplug", 00:18:35.713 "params": { 00:18:35.713 "period_us": 100000, 00:18:35.713 "enable": false 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "bdev_malloc_create", 00:18:35.713 "params": { 00:18:35.713 "name": "malloc0", 00:18:35.713 "num_blocks": 8192, 00:18:35.713 "block_size": 4096, 00:18:35.713 "physical_block_size": 4096, 00:18:35.713 "uuid": "e9d0a61f-7a22-4676-9256-32052cbbcde8", 00:18:35.713 "optimal_io_boundary": 0, 00:18:35.713 "md_size": 0, 00:18:35.713 "dif_type": 0, 00:18:35.713 "dif_is_head_of_md": false, 00:18:35.713 "dif_pi_format": 0 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "bdev_wait_for_examine" 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "scsi", 00:18:35.713 "config": null 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "scheduler", 00:18:35.713 "config": [ 00:18:35.713 { 00:18:35.713 "method": "framework_set_scheduler", 00:18:35.713 "params": { 00:18:35.713 "name": "static" 00:18:35.713 } 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "vhost_scsi", 00:18:35.713 "config": [] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "vhost_blk", 00:18:35.713 "config": [] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "ublk", 00:18:35.713 "config": [ 00:18:35.713 { 00:18:35.713 "method": "ublk_create_target", 00:18:35.713 "params": { 00:18:35.713 "cpumask": "1" 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "ublk_start_disk", 00:18:35.713 "params": { 00:18:35.713 "bdev_name": "malloc0", 00:18:35.713 "ublk_id": 0, 00:18:35.713 "num_queues": 1, 00:18:35.713 "queue_depth": 128 00:18:35.713 } 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "nbd", 00:18:35.713 "config": [] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "nvmf", 00:18:35.713 "config": [ 00:18:35.713 { 00:18:35.713 "method": "nvmf_set_config", 00:18:35.713 "params": { 00:18:35.713 "discovery_filter": "match_any", 00:18:35.713 "admin_cmd_passthru": { 
00:18:35.713 "identify_ctrlr": false 00:18:35.713 }, 00:18:35.713 "dhchap_digests": [ 00:18:35.713 "sha256", 00:18:35.713 "sha384", 00:18:35.713 "sha512" 00:18:35.713 ], 00:18:35.713 "dhchap_dhgroups": [ 00:18:35.713 "null", 00:18:35.713 "ffdhe2048", 00:18:35.713 "ffdhe3072", 00:18:35.713 "ffdhe4096", 00:18:35.713 "ffdhe6144", 00:18:35.713 "ffdhe8192" 00:18:35.713 ] 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "nvmf_set_max_subsystems", 00:18:35.713 "params": { 00:18:35.713 "max_subsystems": 1024 00:18:35.713 } 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "method": "nvmf_set_crdt", 00:18:35.713 "params": { 00:18:35.713 "crdt1": 0, 00:18:35.713 "crdt2": 0, 00:18:35.713 "crdt3": 0 00:18:35.713 } 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 }, 00:18:35.713 { 00:18:35.713 "subsystem": "iscsi", 00:18:35.713 "config": [ 00:18:35.713 { 00:18:35.713 "method": "iscsi_set_options", 00:18:35.713 "params": { 00:18:35.713 "node_base": "iqn.2016-06.io.spdk", 00:18:35.713 "max_sessions": 128, 00:18:35.713 "max_connections_per_session": 2, 00:18:35.713 "max_queue_depth": 64, 00:18:35.713 "default_time2wait": 2, 00:18:35.713 "default_time2retain": 20, 00:18:35.713 "first_burst_length": 8192, 00:18:35.713 "immediate_data": true, 00:18:35.713 "allow_duplicated_isid": false, 00:18:35.713 "error_recovery_level": 0, 00:18:35.713 "nop_timeout": 60, 00:18:35.713 "nop_in_interval": 30, 00:18:35.713 "disable_chap": false, 00:18:35.713 "require_chap": false, 00:18:35.713 "mutual_chap": false, 00:18:35.713 "chap_group": 0, 00:18:35.713 "max_large_datain_per_connection": 64, 00:18:35.713 "max_r2t_per_connection": 4, 00:18:35.713 "pdu_pool_size": 36864, 00:18:35.713 "immediate_data_pool_size": 16384, 00:18:35.713 "data_out_pool_size": 2048 00:18:35.713 } 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 } 00:18:35.713 ] 00:18:35.713 }' 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76585 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76585 ']' 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76585 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76585 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.713 killing process with pid 76585 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76585' 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76585 00:18:35.713 13:16:27 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76585 00:18:37.619 [2024-12-11 13:16:28.725286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.619 [2024-12-11 13:16:28.761188] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.619 [2024-12-11 13:16:28.761317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.619 [2024-12-11 13:16:28.770169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 
00:18:37.619 [2024-12-11 13:16:28.770231] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:37.619 [2024-12-11 13:16:28.770249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:37.619 [2024-12-11 13:16:28.770280] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:37.619 [2024-12-11 13:16:28.770444] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76664 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76664 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76664 ']' 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:40.156 13:16:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:40.156 "subsystems": [ 00:18:40.156 { 00:18:40.156 "subsystem": "fsdev", 00:18:40.156 "config": [ 00:18:40.156 { 00:18:40.156 "method": "fsdev_set_opts", 00:18:40.156 "params": { 00:18:40.156 "fsdev_io_pool_size": 65535, 00:18:40.156 "fsdev_io_cache_size": 256 00:18:40.156 } 00:18:40.156 } 00:18:40.156 ] 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "subsystem": "keyring", 00:18:40.156 "config": [] 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "subsystem": "iobuf", 00:18:40.156 "config": [ 00:18:40.156 { 00:18:40.156 "method": "iobuf_set_options", 00:18:40.156 "params": { 00:18:40.156 "small_pool_count": 8192, 00:18:40.156 "large_pool_count": 1024, 00:18:40.156 "small_bufsize": 8192, 00:18:40.156 "large_bufsize": 135168, 00:18:40.156 "enable_numa": false 00:18:40.156 } 00:18:40.156 } 00:18:40.156 ] 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "subsystem": "sock", 00:18:40.156 "config": [ 00:18:40.156 { 00:18:40.156 "method": "sock_set_default_impl", 00:18:40.156 "params": { 00:18:40.156 "impl_name": "posix" 00:18:40.156 } 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "method": "sock_impl_set_options", 00:18:40.156 "params": { 00:18:40.156 "impl_name": "ssl", 00:18:40.156 "recv_buf_size": 4096, 00:18:40.156 "send_buf_size": 4096, 00:18:40.156 "enable_recv_pipe": true, 00:18:40.156 "enable_quickack": false, 00:18:40.156 "enable_placement_id": 0, 00:18:40.156 "enable_zerocopy_send_server": true, 00:18:40.156 "enable_zerocopy_send_client": false, 00:18:40.156 "zerocopy_threshold": 0, 00:18:40.156 "tls_version": 0, 00:18:40.156 "enable_ktls": false 00:18:40.156 } 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "method": "sock_impl_set_options", 00:18:40.156 "params": { 00:18:40.156 "impl_name": "posix", 00:18:40.156 "recv_buf_size": 2097152, 00:18:40.156 "send_buf_size": 2097152, 00:18:40.156 "enable_recv_pipe": true, 00:18:40.156 "enable_quickack": false, 00:18:40.156 "enable_placement_id": 0, 00:18:40.156 
"enable_zerocopy_send_server": true, 00:18:40.156 "enable_zerocopy_send_client": false, 00:18:40.156 "zerocopy_threshold": 0, 00:18:40.156 "tls_version": 0, 00:18:40.156 "enable_ktls": false 00:18:40.156 } 00:18:40.156 } 00:18:40.156 ] 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "subsystem": "vmd", 00:18:40.156 "config": [] 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "subsystem": "accel", 00:18:40.156 "config": [ 00:18:40.156 { 00:18:40.156 "method": "accel_set_options", 00:18:40.156 "params": { 00:18:40.156 "small_cache_size": 128, 00:18:40.156 "large_cache_size": 16, 00:18:40.156 "task_count": 2048, 00:18:40.156 "sequence_count": 2048, 00:18:40.156 "buf_count": 2048 00:18:40.156 } 00:18:40.156 } 00:18:40.156 ] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "bdev", 00:18:40.157 "config": [ 00:18:40.157 { 00:18:40.157 "method": "bdev_set_options", 00:18:40.157 "params": { 00:18:40.157 "bdev_io_pool_size": 65535, 00:18:40.157 "bdev_io_cache_size": 256, 00:18:40.157 "bdev_auto_examine": true, 00:18:40.157 "iobuf_small_cache_size": 128, 00:18:40.157 "iobuf_large_cache_size": 16 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_raid_set_options", 00:18:40.157 "params": { 00:18:40.157 "process_window_size_kb": 1024, 00:18:40.157 "process_max_bandwidth_mb_sec": 0 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_iscsi_set_options", 00:18:40.157 "params": { 00:18:40.157 "timeout_sec": 30 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_nvme_set_options", 00:18:40.157 "params": { 00:18:40.157 "action_on_timeout": "none", 00:18:40.157 "timeout_us": 0, 00:18:40.157 "timeout_admin_us": 0, 00:18:40.157 "keep_alive_timeout_ms": 10000, 00:18:40.157 "arbitration_burst": 0, 00:18:40.157 "low_priority_weight": 0, 00:18:40.157 "medium_priority_weight": 0, 00:18:40.157 "high_priority_weight": 0, 00:18:40.157 "nvme_adminq_poll_period_us": 10000, 00:18:40.157 "nvme_ioq_poll_period_us": 0, 00:18:40.157 "io_queue_requests": 0, 00:18:40.157 "delay_cmd_submit": true, 00:18:40.157 "transport_retry_count": 4, 00:18:40.157 "bdev_retry_count": 3, 00:18:40.157 "transport_ack_timeout": 0, 00:18:40.157 "ctrlr_loss_timeout_sec": 0, 00:18:40.157 "reconnect_delay_sec": 0, 00:18:40.157 "fast_io_fail_timeout_sec": 0, 00:18:40.157 "disable_auto_failback": false, 00:18:40.157 "generate_uuids": false, 00:18:40.157 "transport_tos": 0, 00:18:40.157 "nvme_error_stat": false, 00:18:40.157 "rdma_srq_size": 0, 00:18:40.157 "io_path_stat": false, 00:18:40.157 "allow_accel_sequence": false, 00:18:40.157 "rdma_max_cq_size": 0, 00:18:40.157 "rdma_cm_event_timeout_ms": 0, 00:18:40.157 "dhchap_digests": [ 00:18:40.157 "sha256", 00:18:40.157 "sha384", 00:18:40.157 "sha512" 00:18:40.157 ], 00:18:40.157 "dhchap_dhgroups": [ 00:18:40.157 "null", 00:18:40.157 "ffdhe2048", 00:18:40.157 "ffdhe3072", 00:18:40.157 "ffdhe4096", 00:18:40.157 "ffdhe6144", 00:18:40.157 "ffdhe8192" 00:18:40.157 ], 00:18:40.157 "rdma_umr_per_io": false 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_nvme_set_hotplug", 00:18:40.157 "params": { 00:18:40.157 "period_us": 100000, 00:18:40.157 "enable": false 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_malloc_create", 00:18:40.157 "params": { 00:18:40.157 "name": "malloc0", 00:18:40.157 "num_blocks": 8192, 00:18:40.157 "block_size": 4096, 00:18:40.157 "physical_block_size": 4096, 00:18:40.157 "uuid": "e9d0a61f-7a22-4676-9256-32052cbbcde8", 00:18:40.157 "optimal_io_boundary": 0, 
00:18:40.157 "md_size": 0, 00:18:40.157 "dif_type": 0, 00:18:40.157 "dif_is_head_of_md": false, 00:18:40.157 "dif_pi_format": 0 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "bdev_wait_for_examine" 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "scsi", 00:18:40.157 "config": null 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "scheduler", 00:18:40.157 "config": [ 00:18:40.157 { 00:18:40.157 "method": "framework_set_scheduler", 00:18:40.157 "params": { 00:18:40.157 "name": "static" 00:18:40.157 } 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "vhost_scsi", 00:18:40.157 "config": [] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "vhost_blk", 00:18:40.157 "config": [] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "ublk", 00:18:40.157 "config": [ 00:18:40.157 { 00:18:40.157 "method": "ublk_create_target", 00:18:40.157 "params": { 00:18:40.157 "cpumask": "1" 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "ublk_start_disk", 00:18:40.157 "params": { 00:18:40.157 "bdev_name": "malloc0", 00:18:40.157 "ublk_id": 0, 00:18:40.157 "num_queues": 1, 00:18:40.157 "queue_depth": 128 00:18:40.157 } 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "nbd", 00:18:40.157 "config": [] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "nvmf", 00:18:40.157 "config": [ 00:18:40.157 { 00:18:40.157 "method": "nvmf_set_config", 00:18:40.157 "params": { 00:18:40.157 "discovery_filter": "match_any", 00:18:40.157 "admin_cmd_passthru": { 00:18:40.157 "identify_ctrlr": false 00:18:40.157 }, 00:18:40.157 "dhchap_digests": [ 00:18:40.157 "sha256", 00:18:40.157 "sha384", 00:18:40.157 "sha512" 00:18:40.157 ], 00:18:40.157 "dhchap_dhgroups": [ 00:18:40.157 "null", 00:18:40.157 "ffdhe2048", 00:18:40.157 "ffdhe3072", 00:18:40.157 "ffdhe4096", 00:18:40.157 "ffdhe6144", 00:18:40.157 "ffdhe8192" 00:18:40.157 ] 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "nvmf_set_max_subsystems", 00:18:40.157 "params": { 00:18:40.157 "max_subsystems": 1024 00:18:40.157 } 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "method": "nvmf_set_crdt", 00:18:40.157 "params": { 00:18:40.157 "crdt1": 0, 00:18:40.157 "crdt2": 0, 00:18:40.157 "crdt3": 0 00:18:40.157 } 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 }, 00:18:40.157 { 00:18:40.157 "subsystem": "iscsi", 00:18:40.157 "config": [ 00:18:40.157 { 00:18:40.157 "method": "iscsi_set_options", 00:18:40.157 "params": { 00:18:40.157 "node_base": "iqn.2016-06.io.spdk", 00:18:40.157 "max_sessions": 128, 00:18:40.157 "max_connections_per_session": 2, 00:18:40.157 "max_queue_depth": 64, 00:18:40.157 "default_time2wait": 2, 00:18:40.157 "default_time2retain": 20, 00:18:40.157 "first_burst_length": 8192, 00:18:40.157 "immediate_data": true, 00:18:40.157 "allow_duplicated_isid": false, 00:18:40.157 "error_recovery_level": 0, 00:18:40.157 "nop_timeout": 60, 00:18:40.157 "nop_in_interval": 30, 00:18:40.157 "disable_chap": false, 00:18:40.157 "require_chap": false, 00:18:40.157 "mutual_chap": false, 00:18:40.157 "chap_group": 0, 00:18:40.157 "max_large_datain_per_connection": 64, 00:18:40.157 "max_r2t_per_connection": 4, 00:18:40.157 "pdu_pool_size": 36864, 00:18:40.157 "immediate_data_pool_size": 16384, 00:18:40.157 "data_out_pool_size": 2048 00:18:40.157 } 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 } 00:18:40.157 ] 00:18:40.157 }' 00:18:40.157 [2024-12-11 13:16:31.373601] Starting SPDK v25.01-pre git 
sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:18:40.157 [2024-12-11 13:16:31.373753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76664 ] 00:18:40.157 [2024-12-11 13:16:31.558085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.157 [2024-12-11 13:16:31.690249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.537 [2024-12-11 13:16:32.883145] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:41.537 [2024-12-11 13:16:32.884407] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:41.537 [2024-12-11 13:16:32.891282] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:41.537 [2024-12-11 13:16:32.891378] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:41.537 [2024-12-11 13:16:32.891393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:41.537 [2024-12-11 13:16:32.891402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:41.537 [2024-12-11 13:16:32.900226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:41.537 [2024-12-11 13:16:32.900250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:41.537 [2024-12-11 13:16:32.907146] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:41.537 [2024-12-11 13:16:32.907251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:41.537 [2024-12-11 13:16:32.924125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:41.537 13:16:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76664 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76664 ']' 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76664 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76664 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.537 
13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76664' 00:18:41.537 killing process with pid 76664 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76664 00:18:41.537 13:16:33 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76664 00:18:43.443 [2024-12-11 13:16:34.723077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:43.443 [2024-12-11 13:16:34.756240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:43.443 [2024-12-11 13:16:34.756381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:43.443 [2024-12-11 13:16:34.763142] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:43.443 [2024-12-11 13:16:34.763204] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:43.443 [2024-12-11 13:16:34.763215] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:43.443 [2024-12-11 13:16:34.763244] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:43.443 [2024-12-11 13:16:34.763406] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:45.353 13:16:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:45.353 00:18:45.353 real 0m11.584s 00:18:45.353 user 0m8.302s 00:18:45.353 sys 0m4.029s 00:18:45.353 13:16:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.353 13:16:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:45.353 ************************************ 00:18:45.353 END TEST test_save_ublk_config 00:18:45.353 ************************************ 00:18:45.353 13:16:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76755 00:18:45.353 13:16:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:45.353 13:16:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.353 13:16:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76755 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@835 -- # '[' -z 76755 ']' 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.353 13:16:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:45.612 [2024-12-11 13:16:36.936674] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
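Note on the configuration dump that opens this section: test_save_ublk_config snapshots the running target's JSON configuration (including the ublk subsystem with its ublk_create_target and ublk_start_disk entries) and restarts from it. A minimal hand-driven sketch of the same round trip, assuming a target is up on the default /var/tmp/spdk.sock socket; the output filename ublk_config.json is hypothetical:

  # Dump the live configuration as JSON (the same shape as the dump above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > ublk_config.json
  # Replay that snapshot into a freshly started spdk_tgt.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ublk_config.json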
00:18:45.612 [2024-12-11 13:16:36.936844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76755 ] 00:18:45.612 [2024-12-11 13:16:37.124753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.871 [2024-12-11 13:16:37.266092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.871 [2024-12-11 13:16:37.266166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.822 13:16:38 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.822 13:16:38 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:46.822 13:16:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:46.822 13:16:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:46.822 13:16:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.822 13:16:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:46.822 ************************************ 00:18:46.822 START TEST test_create_ublk 00:18:46.822 ************************************ 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:46.822 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:46.822 [2024-12-11 13:16:38.335139] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:46.822 [2024-12-11 13:16:38.338179] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.822 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:46.822 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.822 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:47.434 [2024-12-11 13:16:38.689332] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:47.434 [2024-12-11 13:16:38.689843] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:47.434 [2024-12-11 13:16:38.689866] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:47.434 [2024-12-11 13:16:38.689876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:47.434 [2024-12-11 13:16:38.697175] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:47.434 [2024-12-11 13:16:38.697201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:47.434 
[2024-12-11 13:16:38.705158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:47.434 [2024-12-11 13:16:38.705806] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:47.434 [2024-12-11 13:16:38.721151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:47.434 13:16:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:47.434 { 00:18:47.434 "ublk_device": "/dev/ublkb0", 00:18:47.434 "id": 0, 00:18:47.434 "queue_depth": 512, 00:18:47.434 "num_queues": 4, 00:18:47.434 "bdev_name": "Malloc0" 00:18:47.434 } 00:18:47.434 ]' 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:47.434 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:47.435 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:47.435 13:16:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
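For reference, the device this fio job targets was assembled with the same RPCs the xtrace above shows; a minimal sketch, assuming spdk_tgt is already running on the default RPC socket:

  # Create the kernel-facing ublk target, a 128 MiB malloc bdev with a
  # 4 KiB block size, and expose it as /dev/ublkb0 (4 queues, depth 512).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
  # Write-and-verify workload, exactly as assembled by run_fio_test above.
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0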
00:18:47.435 13:16:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:47.694 fio: verification read phase will never start because write phase uses all of runtime 00:18:47.694 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:47.694 fio-3.35 00:18:47.694 Starting 1 process 00:18:57.673 00:18:57.673 fio_test: (groupid=0, jobs=1): err= 0: pid=76813: Wed Dec 11 13:16:49 2024 00:18:57.673 write: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(598MiB/10001msec); 0 zone resets 00:18:57.673 clat (usec): min=40, max=4290, avg=64.52, stdev=104.14 00:18:57.673 lat (usec): min=40, max=4366, avg=64.98, stdev=104.17 00:18:57.673 clat percentiles (usec): 00:18:57.673 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 56], 00:18:57.673 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62], 00:18:57.673 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 72], 00:18:57.673 | 99.00th=[ 82], 99.50th=[ 89], 99.90th=[ 2147], 99.95th=[ 2966], 00:18:57.673 | 99.99th=[ 3687] 00:18:57.673 bw ( KiB/s): min=54224, max=65392, per=100.00%, avg=61380.63, stdev=3652.12, samples=19 00:18:57.673 iops : min=13556, max=16348, avg=15345.16, stdev=913.03, samples=19 00:18:57.673 lat (usec) : 50=6.17%, 100=93.55%, 250=0.06%, 500=0.01%, 750=0.01% 00:18:57.673 lat (usec) : 1000=0.02% 00:18:57.673 lat (msec) : 2=0.07%, 4=0.11%, 10=0.01% 00:18:57.673 cpu : usr=3.10%, sys=10.59%, ctx=153018, majf=0, minf=794 00:18:57.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.673 issued rwts: total=0,153018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.673 00:18:57.673 Run status group 0 (all jobs): 00:18:57.673 WRITE: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=598MiB (627MB), run=10001-10001msec 00:18:57.673 00:18:57.673 Disk stats (read/write): 00:18:57.674 ublkb0: ios=0/151461, merge=0/0, ticks=0/8551, in_queue=8551, util=98.99% 00:18:57.933 13:16:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:57.933 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.933 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:57.933 [2024-12-11 13:16:49.250668] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:57.933 [2024-12-11 13:16:49.290203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:57.933 [2024-12-11 13:16:49.290947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:57.933 [2024-12-11 13:16:49.300215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:57.933 [2024-12-11 13:16:49.300524] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:57.933 [2024-12-11 13:16:49.300545] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:57.933 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.933 13:16:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:18:57.933 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:57.934 [2024-12-11 13:16:49.323244] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:57.934 request: 00:18:57.934 { 00:18:57.934 "ublk_id": 0, 00:18:57.934 "method": "ublk_stop_disk", 00:18:57.934 "req_id": 1 00:18:57.934 } 00:18:57.934 Got JSON-RPC error response 00:18:57.934 response: 00:18:57.934 { 00:18:57.934 "code": -19, 00:18:57.934 "message": "No such device" 00:18:57.934 } 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.934 13:16:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:57.934 [2024-12-11 13:16:49.338229] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:57.934 [2024-12-11 13:16:49.346158] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:57.934 [2024-12-11 13:16:49.346200] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.934 13:16:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.934 13:16:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.871 13:16:50 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:58.871 13:16:50 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:58.871 13:16:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:58.871 00:18:58.871 real 0m11.950s 00:18:58.871 user 0m0.715s 00:18:58.871 sys 0m1.206s 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.871 13:16:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 ************************************ 00:18:58.871 END TEST test_create_ublk 00:18:58.871 ************************************ 00:18:58.871 13:16:50 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:58.871 13:16:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.871 13:16:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.871 13:16:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 ************************************ 00:18:58.871 START TEST test_create_multi_ublk 00:18:58.871 ************************************ 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:58.871 [2024-12-11 13:16:50.360134] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:58.871 [2024-12-11 13:16:50.363379] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.871 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.131 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.131 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:59.131 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:59.131 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.131 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.131 [2024-12-11 13:16:50.686316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:18:59.131 [2024-12-11 13:16:50.686835] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:59.131 [2024-12-11 13:16:50.686854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:59.131 [2024-12-11 13:16:50.686869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:59.131 [2024-12-11 13:16:50.694556] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:59.131 [2024-12-11 13:16:50.694587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:59.390 [2024-12-11 13:16:50.701152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:59.390 [2024-12-11 13:16:50.701834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:59.390 [2024-12-11 13:16:50.715237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.390 13:16:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.649 [2024-12-11 13:16:51.069297] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:59.649 [2024-12-11 13:16:51.069805] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:59.649 [2024-12-11 13:16:51.069828] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:59.649 [2024-12-11 13:16:51.069836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:59.649 [2024-12-11 13:16:51.077176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:59.649 [2024-12-11 13:16:51.077201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:59.649 [2024-12-11 13:16:51.085160] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:59.649 [2024-12-11 13:16:51.085827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:59.649 [2024-12-11 13:16:51.095169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:59.649 
13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.649 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.908 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.908 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:59.908 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:59.908 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.908 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.908 [2024-12-11 13:16:51.438288] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:59.908 [2024-12-11 13:16:51.438805] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:59.908 [2024-12-11 13:16:51.438823] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:59.909 [2024-12-11 13:16:51.438835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:59.909 [2024-12-11 13:16:51.446173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:59.909 [2024-12-11 13:16:51.446205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:59.909 [2024-12-11 13:16:51.454161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:59.909 [2024-12-11 13:16:51.454802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:59.909 [2024-12-11 13:16:51.463187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:59.909 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.909 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:59.909 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:00.168 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:00.168 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.168 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.427 [2024-12-11 13:16:51.806345] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:00.427 [2024-12-11 13:16:51.806841] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:00.427 [2024-12-11 13:16:51.806863] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:00.427 [2024-12-11 13:16:51.806872] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:00.427 
[2024-12-11 13:16:51.815599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:00.427 [2024-12-11 13:16:51.815625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:00.427 [2024-12-11 13:16:51.822162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:00.427 [2024-12-11 13:16:51.822818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:00.427 [2024-12-11 13:16:51.827734] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:00.427 { 00:19:00.427 "ublk_device": "/dev/ublkb0", 00:19:00.427 "id": 0, 00:19:00.427 "queue_depth": 512, 00:19:00.427 "num_queues": 4, 00:19:00.427 "bdev_name": "Malloc0" 00:19:00.427 }, 00:19:00.427 { 00:19:00.427 "ublk_device": "/dev/ublkb1", 00:19:00.427 "id": 1, 00:19:00.427 "queue_depth": 512, 00:19:00.427 "num_queues": 4, 00:19:00.427 "bdev_name": "Malloc1" 00:19:00.427 }, 00:19:00.427 { 00:19:00.427 "ublk_device": "/dev/ublkb2", 00:19:00.427 "id": 2, 00:19:00.427 "queue_depth": 512, 00:19:00.427 "num_queues": 4, 00:19:00.427 "bdev_name": "Malloc2" 00:19:00.427 }, 00:19:00.427 { 00:19:00.427 "ublk_device": "/dev/ublkb3", 00:19:00.427 "id": 3, 00:19:00.427 "queue_depth": 512, 00:19:00.427 "num_queues": 4, 00:19:00.427 "bdev_name": "Malloc3" 00:19:00.427 } 00:19:00.427 ]' 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:00.427 13:16:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:00.686 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:00.687 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:00.946 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.205 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.205 [2024-12-11 13:16:52.744308] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:01.464 [2024-12-11 13:16:52.784195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:01.464 [2024-12-11 13:16:52.785147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:01.464 [2024-12-11 13:16:52.793183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:01.464 [2024-12-11 13:16:52.793506] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:01.464 [2024-12-11 13:16:52.793528] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:01.464 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.464 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.464 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:01.464 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.464 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 [2024-12-11 13:16:52.808281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:01.464 [2024-12-11 13:16:52.845713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:01.465 [2024-12-11 13:16:52.846651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:01.465 [2024-12-11 13:16:52.847714] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:01.465 [2024-12-11 13:16:52.848001] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:01.465 [2024-12-11 13:16:52.848021] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.465 [2024-12-11 13:16:52.865282] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:01.465 [2024-12-11 13:16:52.901733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:01.465 [2024-12-11 13:16:52.902626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:01.465 [2024-12-11 13:16:52.909163] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:01.465 [2024-12-11 13:16:52.909469] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:01.465 [2024-12-11 13:16:52.909488] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.465 [2024-12-11 13:16:52.923264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:01.465 [2024-12-11 13:16:52.965704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:01.465 [2024-12-11 13:16:52.966487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:01.465 [2024-12-11 13:16:52.976161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:01.465 [2024-12-11 13:16:52.976465] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:01.465 [2024-12-11 13:16:52.976479] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.465 13:16:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:01.724 [2024-12-11 13:16:53.189269] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:01.724 [2024-12-11 13:16:53.197138] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:01.724 [2024-12-11 13:16:53.197191] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:01.724 13:16:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:01.724 13:16:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:01.724 13:16:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:01.724 13:16:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.724 13:16:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:02.662 13:16:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.662 13:16:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:02.662 13:16:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:02.662 13:16:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.662 13:16:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:02.922 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.922 13:16:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:02.922 13:16:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:02.922 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.922 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.490 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.490 13:16:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:03.491 13:16:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:03.491 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.491 13:16:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:03.748 13:16:55 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:03.748 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:04.006 13:16:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:04.006 00:19:04.006 real 0m4.974s 00:19:04.006 user 0m1.035s 00:19:04.006 sys 0m0.246s 00:19:04.006 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.006 13:16:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.007 ************************************ 00:19:04.007 END TEST test_create_multi_ublk 00:19:04.007 ************************************ 00:19:04.007 13:16:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:04.007 13:16:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:04.007 13:16:55 ublk -- ublk/ublk.sh@130 -- # killprocess 76755 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@954 -- # '[' -z 76755 ']' 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@958 -- # kill -0 76755 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@959 -- # uname 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76755 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.007 killing process with pid 76755 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76755' 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@973 -- # kill 76755 00:19:04.007 13:16:55 ublk -- common/autotest_common.sh@978 -- # wait 76755 00:19:05.386 [2024-12-11 13:16:56.734495] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:05.386 [2024-12-11 13:16:56.734606] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:06.766 00:19:06.766 real 0m33.252s 00:19:06.766 user 0m46.000s 00:19:06.766 sys 0m12.131s 00:19:06.766 13:16:58 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.766 13:16:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:06.766 ************************************ 00:19:06.766 END TEST ublk 00:19:06.766 ************************************ 00:19:06.766 13:16:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:06.766 
13:16:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.766 13:16:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.766 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:19:06.766 ************************************ 00:19:06.766 START TEST ublk_recovery 00:19:06.766 ************************************ 00:19:06.766 13:16:58 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:07.026 * Looking for test storage... 00:19:07.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.026 13:16:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.026 --rc genhtml_branch_coverage=1 00:19:07.026 --rc genhtml_function_coverage=1 00:19:07.026 --rc genhtml_legend=1 00:19:07.026 --rc geninfo_all_blocks=1 00:19:07.026 --rc geninfo_unexecuted_blocks=1 00:19:07.026 00:19:07.026 ' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.026 --rc genhtml_branch_coverage=1 00:19:07.026 --rc genhtml_function_coverage=1 00:19:07.026 --rc genhtml_legend=1 00:19:07.026 --rc geninfo_all_blocks=1 00:19:07.026 --rc geninfo_unexecuted_blocks=1 00:19:07.026 00:19:07.026 ' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.026 --rc genhtml_branch_coverage=1 00:19:07.026 --rc genhtml_function_coverage=1 00:19:07.026 --rc genhtml_legend=1 00:19:07.026 --rc geninfo_all_blocks=1 00:19:07.026 --rc geninfo_unexecuted_blocks=1 00:19:07.026 00:19:07.026 ' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.026 --rc genhtml_branch_coverage=1 00:19:07.026 --rc genhtml_function_coverage=1 00:19:07.026 --rc genhtml_legend=1 00:19:07.026 --rc geninfo_all_blocks=1 00:19:07.026 --rc geninfo_unexecuted_blocks=1 00:19:07.026 00:19:07.026 ' 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:07.026 13:16:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77194 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.026 13:16:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77194 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77194 ']' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.026 13:16:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.026 [2024-12-11 13:16:58.583535] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:19:07.026 [2024-12-11 13:16:58.583709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77194 ] 00:19:07.285 [2024-12-11 13:16:58.774307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.544 [2024-12-11 13:16:58.924000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.544 [2024-12-11 13:16:58.924039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.480 13:16:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.480 13:16:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:08.480 13:16:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:08.480 13:16:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.480 13:16:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.480 [2024-12-11 13:16:59.998139] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:08.480 [2024-12-11 13:17:00.005143] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:08.480 13:17:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.480 13:17:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:08.480 13:17:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.480 13:17:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.740 malloc0 00:19:08.740 13:17:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.740 13:17:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:08.740 13:17:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.740 13:17:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.740 [2024-12-11 13:17:00.181362] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:08.740 [2024-12-11 13:17:00.181493] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:08.740 [2024-12-11 13:17:00.181508] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:08.740 [2024-12-11 13:17:00.181524] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:08.740 [2024-12-11 13:17:00.189169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:08.740 [2024-12-11 13:17:00.189193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:08.740 [2024-12-11 13:17:00.197165] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:08.740 [2024-12-11 13:17:00.197335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:08.740 [2024-12-11 13:17:00.220156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:08.740 1 00:19:08.740 13:17:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.740 13:17:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:09.679 13:17:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77235 00:19:09.679 13:17:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:09.679 13:17:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:09.938 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.938 fio-3.35 00:19:09.938 Starting 1 process 00:19:15.210 13:17:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77194 00:19:15.210 13:17:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:20.478 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77194 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:20.478 13:17:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77346 00:19:20.478 13:17:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:20.478 13:17:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.478 13:17:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77346 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77346 ']' 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.478 13:17:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.478 [2024-12-11 13:17:11.367284] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
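At this point the first target (pid 77194) has been killed with SIGKILL while fio was mid-run, and a second spdk_tgt (pid 77346) is starting in its place; as the RPCs below show, the test re-attaches the still-open /dev/ublkb1 rather than starting a new disk. A minimal sketch of that recovery sequence, assuming the restarted target is listening on the default RPC socket:

  # Recreate the ublk target and the backing bdev under the same name,
  # then re-bind it to the existing ublk id 1 via the recovery RPC.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1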
00:19:20.478 [2024-12-11 13:17:11.367436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77346 ] 00:19:20.478 [2024-12-11 13:17:11.553885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.478 [2024-12-11 13:17:11.706768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.478 [2024-12-11 13:17:11.706807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:21.445 13:17:12 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 [2024-12-11 13:17:12.761144] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:21.445 [2024-12-11 13:17:12.764148] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.445 13:17:12 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 malloc0 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.445 13:17:12 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 [2024-12-11 13:17:12.921334] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:21.445 [2024-12-11 13:17:12.921390] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:21.445 [2024-12-11 13:17:12.921405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:21.445 [2024-12-11 13:17:12.929199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:21.445 [2024-12-11 13:17:12.929232] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:19:21.445 [2024-12-11 13:17:12.929243] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:21.445 1 00:19:21.445 [2024-12-11 13:17:12.929358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:21.445 13:17:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.445 13:17:12 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77235 00:19:21.445 [2024-12-11 13:17:12.937162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:21.445 [2024-12-11 13:17:12.943722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:21.445 [2024-12-11 13:17:12.951392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:21.445 [2024-12-11 
13:17:12.951419] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:17.685 00:20:17.685 fio_test: (groupid=0, jobs=1): err= 0: pid=77238: Wed Dec 11 13:18:01 2024 00:20:17.685 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(4915MiB/60002msec) 00:20:17.685 slat (usec): min=2, max=418, avg= 7.81, stdev= 2.34 00:20:17.685 clat (usec): min=1075, max=6721.5k, avg=3028.38, stdev=49741.93 00:20:17.685 lat (usec): min=1081, max=6721.5k, avg=3036.19, stdev=49741.93 00:20:17.686 clat percentiles (usec): 00:20:17.686 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:20:17.686 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2638], 00:20:17.686 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3785], 00:20:17.686 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 7046], 99.95th=[ 7963], 00:20:17.686 | 99.99th=[13173] 00:20:17.686 bw ( KiB/s): min= 1472, max=103888, per=100.00%, avg=93342.79, stdev=12368.47, samples=107 00:20:17.686 iops : min= 368, max=25972, avg=23335.69, stdev=3092.12, samples=107 00:20:17.686 write: IOPS=21.0k, BW=81.8MiB/s (85.8MB/s)(4911MiB/60002msec); 0 zone resets 00:20:17.686 slat (usec): min=2, max=276, avg= 7.81, stdev= 2.25 00:20:17.686 clat (usec): min=1053, max=6721.6k, avg=3060.83, stdev=46019.23 00:20:17.686 lat (usec): min=1061, max=6721.7k, avg=3068.63, stdev=46019.23 00:20:17.686 clat percentiles (usec): 00:20:17.686 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:20:17.686 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2737], 00:20:17.686 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2966], 95.00th=[ 3785], 00:20:17.686 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[ 7963], 00:20:17.686 | 99.99th=[13173] 00:20:17.686 bw ( KiB/s): min= 1624, max=103512, per=100.00%, avg=93259.05, stdev=12193.81, samples=107 00:20:17.686 iops : min= 406, max=25878, avg=23314.75, stdev=3048.45, samples=107 00:20:17.686 lat (msec) : 2=0.58%, 4=95.44%, 10=3.95%, 20=0.02%, >=2000=0.01% 00:20:17.686 cpu : usr=12.14%, sys=33.01%, ctx=112352, majf=0, minf=13 00:20:17.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:17.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:17.686 issued rwts: total=1258335,1257186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:17.686 00:20:17.686 Run status group 0 (all jobs): 00:20:17.686 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=4915MiB (5154MB), run=60002-60002msec 00:20:17.686 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=4911MiB (5149MB), run=60002-60002msec 00:20:17.686 00:20:17.686 Disk stats (read/write): 00:20:17.686 ublkb1: ios=1255742/1254717, merge=0/0, ticks=3694771/3598885, in_queue=7293657, util=99.95% 00:20:17.686 13:18:01 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.686 [2024-12-11 13:18:01.516510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:17.686 [2024-12-11 13:18:01.554165] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:17.686 [2024-12-11 13:18:01.554435] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:17.686 [2024-12-11 13:18:01.563187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:17.686 [2024-12-11 13:18:01.563351] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:17.686 [2024-12-11 13:18:01.563367] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.686 13:18:01 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.686 [2024-12-11 13:18:01.578290] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:17.686 [2024-12-11 13:18:01.586160] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:17.686 [2024-12-11 13:18:01.586199] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.686 13:18:01 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:17.686 13:18:01 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:17.686 13:18:01 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77346 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 77346 ']' 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 77346 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77346 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.686 killing process with pid 77346 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77346' 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@973 -- # kill 77346 00:20:17.686 13:18:01 ublk_recovery -- common/autotest_common.sh@978 -- # wait 77346 00:20:17.686 [2024-12-11 13:18:03.380930] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:17.686 [2024-12-11 13:18:03.381028] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:17.686 00:20:17.686 real 1m6.705s 00:20:17.686 user 1m51.519s 00:20:17.686 sys 0m37.967s 00:20:17.686 ************************************ 00:20:17.686 END TEST ublk_recovery 00:20:17.686 ************************************ 00:20:17.686 13:18:04 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.686 13:18:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.686 13:18:04 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:17.686 13:18:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:17.686 13:18:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.686 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.686 13:18:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:17.686 13:18:05 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:17.686 13:18:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:17.686 13:18:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.686 13:18:05 -- common/autotest_common.sh@10 -- # set +x 00:20:17.686 ************************************ 00:20:17.686 START TEST ftl 00:20:17.686 ************************************ 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:17.686 * Looking for test storage... 00:20:17.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.686 13:18:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.686 13:18:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.686 13:18:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.686 13:18:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.686 13:18:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.686 13:18:05 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:17.686 13:18:05 ftl -- scripts/common.sh@345 -- # : 1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.686 13:18:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.686 13:18:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@353 -- # local d=1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.686 13:18:05 ftl -- scripts/common.sh@355 -- # echo 1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.686 13:18:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@353 -- # local d=2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.686 13:18:05 ftl -- scripts/common.sh@355 -- # echo 2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.686 13:18:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.686 13:18:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.686 13:18:05 ftl -- scripts/common.sh@368 -- # return 0 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:17.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.686 --rc genhtml_branch_coverage=1 00:20:17.686 --rc genhtml_function_coverage=1 00:20:17.686 --rc genhtml_legend=1 00:20:17.686 --rc geninfo_all_blocks=1 00:20:17.686 --rc geninfo_unexecuted_blocks=1 00:20:17.686 00:20:17.686 ' 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:17.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.686 --rc genhtml_branch_coverage=1 00:20:17.686 --rc genhtml_function_coverage=1 00:20:17.686 --rc genhtml_legend=1 00:20:17.686 --rc geninfo_all_blocks=1 00:20:17.686 --rc geninfo_unexecuted_blocks=1 00:20:17.686 00:20:17.686 ' 00:20:17.686 13:18:05 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:17.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.686 --rc genhtml_branch_coverage=1 00:20:17.686 --rc genhtml_function_coverage=1 00:20:17.686 --rc genhtml_legend=1 00:20:17.686 --rc geninfo_all_blocks=1 00:20:17.687 --rc geninfo_unexecuted_blocks=1 00:20:17.687 00:20:17.687 ' 00:20:17.687 13:18:05 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:17.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.687 --rc genhtml_branch_coverage=1 00:20:17.687 --rc genhtml_function_coverage=1 00:20:17.687 --rc genhtml_legend=1 00:20:17.687 --rc geninfo_all_blocks=1 00:20:17.687 --rc geninfo_unexecuted_blocks=1 00:20:17.687 00:20:17.687 ' 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:17.687 13:18:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:17.687 13:18:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:17.687 13:18:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:17.687 13:18:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
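The `lt 1.15 2` check traced above is a field-by-field dotted-version comparison used to pick lcov 1.x versus 2.x coverage flags. A standalone rendering of the same idea (not the exact scripts/common.sh code, which also validates each field with a regex) looks like:

    # Return 0 (true) if dotted version $1 is strictly less than $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # prints: lcov predates 2.x

Here the check succeeds (1 < 2), which is why the trace selects the lcov 1.x spelling of the options, `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1`.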
00:20:17.687 13:18:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:17.687 13:18:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:17.687 13:18:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:17.687 13:18:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:17.687 13:18:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:17.687 13:18:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:17.687 13:18:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:17.687 13:18:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:17.687 13:18:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:17.687 13:18:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:17.687 13:18:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:17.687 13:18:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:17.687 13:18:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:17.687 13:18:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:17.687 13:18:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:17.687 13:18:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:17.687 13:18:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:17.687 13:18:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.687 13:18:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:17.687 13:18:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:17.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:17.687 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:17.687 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:17.687 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:17.687 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:17.687 13:18:06 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:17.687 13:18:06 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78152 00:20:17.687 13:18:06 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78152 00:20:17.687 13:18:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 78152 ']' 00:20:17.687 13:18:06 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.687 13:18:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.687 13:18:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.687 13:18:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.687 13:18:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:17.687 [2024-12-11 13:18:06.311093] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:20:17.687 [2024-12-11 13:18:06.311260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78152 ] 00:20:17.687 [2024-12-11 13:18:06.495147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.687 [2024-12-11 13:18:06.624825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.687 13:18:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.687 13:18:07 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:17.687 13:18:07 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:17.687 13:18:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:17.687 13:18:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:17.687 13:18:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:17.687 13:18:08 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:17.687 13:18:08 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:17.687 13:18:08 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@50 -- # break 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:17.687 13:18:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:17.947 13:18:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:17.947 13:18:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:17.947 13:18:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:17.947 13:18:09 ftl -- ftl/ftl.sh@63 -- # break 00:20:17.947 13:18:09 ftl -- ftl/ftl.sh@66 -- # killprocess 78152 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 78152 ']' 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@958 -- # kill -0 78152 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@959 -- # uname 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.947 13:18:09 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78152 00:20:17.947 killing process with pid 78152 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78152' 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@973 -- # kill 78152 00:20:17.947 13:18:09 ftl -- common/autotest_common.sh@978 -- # wait 78152 00:20:20.485 13:18:11 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:20.485 13:18:11 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:20.485 13:18:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:20.485 13:18:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.485 13:18:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:20.485 ************************************ 00:20:20.485 START TEST ftl_fio_basic 00:20:20.485 ************************************ 00:20:20.485 13:18:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:20.745 * Looking for test storage... 00:20:20.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.745 --rc genhtml_branch_coverage=1 00:20:20.745 --rc genhtml_function_coverage=1 00:20:20.745 --rc genhtml_legend=1 00:20:20.745 --rc geninfo_all_blocks=1 00:20:20.745 --rc geninfo_unexecuted_blocks=1 00:20:20.745 00:20:20.745 ' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.745 --rc genhtml_branch_coverage=1 00:20:20.745 --rc genhtml_function_coverage=1 00:20:20.745 --rc genhtml_legend=1 00:20:20.745 --rc geninfo_all_blocks=1 00:20:20.745 --rc geninfo_unexecuted_blocks=1 00:20:20.745 00:20:20.745 ' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.745 --rc genhtml_branch_coverage=1 00:20:20.745 --rc genhtml_function_coverage=1 00:20:20.745 --rc genhtml_legend=1 00:20:20.745 --rc geninfo_all_blocks=1 00:20:20.745 --rc geninfo_unexecuted_blocks=1 00:20:20.745 00:20:20.745 ' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.745 --rc genhtml_branch_coverage=1 00:20:20.745 --rc genhtml_function_coverage=1 00:20:20.745 --rc genhtml_legend=1 00:20:20.745 --rc geninfo_all_blocks=1 00:20:20.745 --rc geninfo_unexecuted_blocks=1 00:20:20.745 00:20:20.745 ' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
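The killprocess helper, traced twice above (pids 77346 and 78152), follows a defensive pattern: verify the pid is set and alive, check what it actually is before signalling, then reap it. Roughly, simplified from the autotest_common.sh trace (the sudo branch's real handling is not shown in this log and is only stubbed here):

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1       # still alive?
        if [[ "$(uname)" == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # resolves to reactor_0 in this run
            # assumption: the real helper special-cases name == sudo;
            # that branch is not taken here, so it is left as a stub
            [[ "$name" != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap the process
    }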
00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:20.745 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78301 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78301 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 78301 ']' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.746 13:18:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:21.005 [2024-12-11 13:18:12.377204] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
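The 'basic' suite selected above is just a whitespace-separated list of fio job names handed to the test via the suite associative array. A sketch of the dispatch loop this implies is below; the job-file location is an assumption (the log names only the tests), and the real run wires fio to the ftl0 bdev through SPDK's fio plugin rather than plain fio:

    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$testdir/config/ftl.json
    tests='randw-verify randw-verify-j2 randw-verify-depth128'
    for t in $tests; do
        # hypothetical path: <testdir>/config/fio/<test>.fio
        timeout 240 fio "$testdir/config/fio/$t.fio" || exit 1
    done

The 240 comes from the suite's own timeout variable, and FTL_BDEV_NAME/FTL_JSON_CONF are exported exactly as in the trace so the job files can reference them.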
00:20:21.005 [2024-12-11 13:18:12.377570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78301 ] 00:20:21.005 [2024-12-11 13:18:12.564736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:21.265 [2024-12-11 13:18:12.707870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.265 [2024-12-11 13:18:12.709935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.265 [2024-12-11 13:18:12.709940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:22.203 13:18:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:22.463 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:22.722 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:22.722 { 00:20:22.722 "name": "nvme0n1", 00:20:22.722 "aliases": [ 00:20:22.722 "82e2f05d-020c-4aa4-a227-2a7160617676" 00:20:22.722 ], 00:20:22.722 "product_name": "NVMe disk", 00:20:22.722 "block_size": 4096, 00:20:22.722 "num_blocks": 1310720, 00:20:22.722 "uuid": "82e2f05d-020c-4aa4-a227-2a7160617676", 00:20:22.722 "numa_id": -1, 00:20:22.722 "assigned_rate_limits": { 00:20:22.722 "rw_ios_per_sec": 0, 00:20:22.722 "rw_mbytes_per_sec": 0, 00:20:22.722 "r_mbytes_per_sec": 0, 00:20:22.722 "w_mbytes_per_sec": 0 00:20:22.722 }, 00:20:22.722 "claimed": false, 00:20:22.722 "zoned": false, 00:20:22.722 "supported_io_types": { 00:20:22.722 "read": true, 00:20:22.722 "write": true, 00:20:22.722 "unmap": true, 00:20:22.722 "flush": true, 00:20:22.722 "reset": true, 00:20:22.722 "nvme_admin": true, 00:20:22.722 "nvme_io": true, 00:20:22.722 "nvme_io_md": false, 00:20:22.722 "write_zeroes": true, 00:20:22.722 "zcopy": false, 00:20:22.722 "get_zone_info": false, 00:20:22.722 "zone_management": false, 00:20:22.722 "zone_append": false, 00:20:22.722 "compare": true, 00:20:22.722 "compare_and_write": false, 00:20:22.722 "abort": true, 00:20:22.722 
"seek_hole": false, 00:20:22.722 "seek_data": false, 00:20:22.722 "copy": true, 00:20:22.722 "nvme_iov_md": false 00:20:22.722 }, 00:20:22.722 "driver_specific": { 00:20:22.722 "nvme": [ 00:20:22.722 { 00:20:22.722 "pci_address": "0000:00:11.0", 00:20:22.722 "trid": { 00:20:22.722 "trtype": "PCIe", 00:20:22.722 "traddr": "0000:00:11.0" 00:20:22.722 }, 00:20:22.722 "ctrlr_data": { 00:20:22.722 "cntlid": 0, 00:20:22.722 "vendor_id": "0x1b36", 00:20:22.722 "model_number": "QEMU NVMe Ctrl", 00:20:22.722 "serial_number": "12341", 00:20:22.722 "firmware_revision": "8.0.0", 00:20:22.722 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:22.722 "oacs": { 00:20:22.722 "security": 0, 00:20:22.722 "format": 1, 00:20:22.722 "firmware": 0, 00:20:22.722 "ns_manage": 1 00:20:22.722 }, 00:20:22.722 "multi_ctrlr": false, 00:20:22.722 "ana_reporting": false 00:20:22.722 }, 00:20:22.722 "vs": { 00:20:22.722 "nvme_version": "1.4" 00:20:22.722 }, 00:20:22.722 "ns_data": { 00:20:22.722 "id": 1, 00:20:22.722 "can_share": false 00:20:22.722 } 00:20:22.722 } 00:20:22.722 ], 00:20:22.722 "mp_policy": "active_passive" 00:20:22.722 } 00:20:22.722 } 00:20:22.722 ]' 00:20:22.722 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:22.722 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:22.722 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:22.982 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:23.241 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7dc7797c-a41c-4001-89a4-41c6eb48721c 00:20:23.241 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7dc7797c-a41c-4001-89a4-41c6eb48721c 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7e4796f6-7125-4f0d-8641-adc2cf5df015 
00:20:23.501 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:23.501 13:18:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:23.766 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:23.766 { 00:20:23.766 "name": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:23.766 "aliases": [ 00:20:23.766 "lvs/nvme0n1p0" 00:20:23.766 ], 00:20:23.766 "product_name": "Logical Volume", 00:20:23.766 "block_size": 4096, 00:20:23.766 "num_blocks": 26476544, 00:20:23.766 "uuid": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:23.766 "assigned_rate_limits": { 00:20:23.766 "rw_ios_per_sec": 0, 00:20:23.766 "rw_mbytes_per_sec": 0, 00:20:23.766 "r_mbytes_per_sec": 0, 00:20:23.766 "w_mbytes_per_sec": 0 00:20:23.766 }, 00:20:23.766 "claimed": false, 00:20:23.766 "zoned": false, 00:20:23.766 "supported_io_types": { 00:20:23.766 "read": true, 00:20:23.766 "write": true, 00:20:23.766 "unmap": true, 00:20:23.766 "flush": false, 00:20:23.766 "reset": true, 00:20:23.766 "nvme_admin": false, 00:20:23.766 "nvme_io": false, 00:20:23.766 "nvme_io_md": false, 00:20:23.766 "write_zeroes": true, 00:20:23.766 "zcopy": false, 00:20:23.766 "get_zone_info": false, 00:20:23.766 "zone_management": false, 00:20:23.766 "zone_append": false, 00:20:23.766 "compare": false, 00:20:23.766 "compare_and_write": false, 00:20:23.766 "abort": false, 00:20:23.766 "seek_hole": true, 00:20:23.766 "seek_data": true, 00:20:23.766 "copy": false, 00:20:23.767 "nvme_iov_md": false 00:20:23.767 }, 00:20:23.767 "driver_specific": { 00:20:23.767 "lvol": { 00:20:23.767 "lvol_store_uuid": "7dc7797c-a41c-4001-89a4-41c6eb48721c", 00:20:23.767 "base_bdev": "nvme0n1", 00:20:23.767 "thin_provision": true, 00:20:23.767 "num_allocated_clusters": 0, 00:20:23.767 "snapshot": false, 00:20:23.767 "clone": false, 00:20:23.767 "esnap_clone": false 00:20:23.767 } 00:20:23.767 } 00:20:23.767 } 00:20:23.767 ]' 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:23.767 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.026 13:18:15 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:24.026 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.286 { 00:20:24.286 "name": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:24.286 "aliases": [ 00:20:24.286 "lvs/nvme0n1p0" 00:20:24.286 ], 00:20:24.286 "product_name": "Logical Volume", 00:20:24.286 "block_size": 4096, 00:20:24.286 "num_blocks": 26476544, 00:20:24.286 "uuid": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:24.286 "assigned_rate_limits": { 00:20:24.286 "rw_ios_per_sec": 0, 00:20:24.286 "rw_mbytes_per_sec": 0, 00:20:24.286 "r_mbytes_per_sec": 0, 00:20:24.286 "w_mbytes_per_sec": 0 00:20:24.286 }, 00:20:24.286 "claimed": false, 00:20:24.286 "zoned": false, 00:20:24.286 "supported_io_types": { 00:20:24.286 "read": true, 00:20:24.286 "write": true, 00:20:24.286 "unmap": true, 00:20:24.286 "flush": false, 00:20:24.286 "reset": true, 00:20:24.286 "nvme_admin": false, 00:20:24.286 "nvme_io": false, 00:20:24.286 "nvme_io_md": false, 00:20:24.286 "write_zeroes": true, 00:20:24.286 "zcopy": false, 00:20:24.286 "get_zone_info": false, 00:20:24.286 "zone_management": false, 00:20:24.286 "zone_append": false, 00:20:24.286 "compare": false, 00:20:24.286 "compare_and_write": false, 00:20:24.286 "abort": false, 00:20:24.286 "seek_hole": true, 00:20:24.286 "seek_data": true, 00:20:24.286 "copy": false, 00:20:24.286 "nvme_iov_md": false 00:20:24.286 }, 00:20:24.286 "driver_specific": { 00:20:24.286 "lvol": { 00:20:24.286 "lvol_store_uuid": "7dc7797c-a41c-4001-89a4-41c6eb48721c", 00:20:24.286 "base_bdev": "nvme0n1", 00:20:24.286 "thin_provision": true, 00:20:24.286 "num_allocated_clusters": 0, 00:20:24.286 "snapshot": false, 00:20:24.286 "clone": false, 00:20:24.286 "esnap_clone": false 00:20:24.286 } 00:20:24.286 } 00:20:24.286 } 00:20:24.286 ]' 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:24.286 13:18:15 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:24.545 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:24.545 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e4796f6-7125-4f0d-8641-adc2cf5df015 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.805 { 00:20:24.805 "name": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:24.805 "aliases": [ 00:20:24.805 "lvs/nvme0n1p0" 00:20:24.805 ], 00:20:24.805 "product_name": "Logical Volume", 00:20:24.805 "block_size": 4096, 00:20:24.805 "num_blocks": 26476544, 00:20:24.805 "uuid": "7e4796f6-7125-4f0d-8641-adc2cf5df015", 00:20:24.805 "assigned_rate_limits": { 00:20:24.805 "rw_ios_per_sec": 0, 00:20:24.805 "rw_mbytes_per_sec": 0, 00:20:24.805 "r_mbytes_per_sec": 0, 00:20:24.805 "w_mbytes_per_sec": 0 00:20:24.805 }, 00:20:24.805 "claimed": false, 00:20:24.805 "zoned": false, 00:20:24.805 "supported_io_types": { 00:20:24.805 "read": true, 00:20:24.805 "write": true, 00:20:24.805 "unmap": true, 00:20:24.805 "flush": false, 00:20:24.805 "reset": true, 00:20:24.805 "nvme_admin": false, 00:20:24.805 "nvme_io": false, 00:20:24.805 "nvme_io_md": false, 00:20:24.805 "write_zeroes": true, 00:20:24.805 "zcopy": false, 00:20:24.805 "get_zone_info": false, 00:20:24.805 "zone_management": false, 00:20:24.805 "zone_append": false, 00:20:24.805 "compare": false, 00:20:24.805 "compare_and_write": false, 00:20:24.805 "abort": false, 00:20:24.805 "seek_hole": true, 00:20:24.805 "seek_data": true, 00:20:24.805 "copy": false, 00:20:24.805 "nvme_iov_md": false 00:20:24.805 }, 00:20:24.805 "driver_specific": { 00:20:24.805 "lvol": { 00:20:24.805 "lvol_store_uuid": "7dc7797c-a41c-4001-89a4-41c6eb48721c", 00:20:24.805 "base_bdev": "nvme0n1", 00:20:24.805 "thin_provision": true, 00:20:24.805 "num_allocated_clusters": 0, 00:20:24.805 "snapshot": false, 00:20:24.805 "clone": false, 00:20:24.805 "esnap_clone": false 00:20:24.805 } 00:20:24.805 } 00:20:24.805 } 00:20:24.805 ]' 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:24.805 13:18:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7e4796f6-7125-4f0d-8641-adc2cf5df015 -c nvc0n1p0 --l2p_dram_limit 60 00:20:25.065 [2024-12-11 13:18:16.514476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.514714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.065 [2024-12-11 13:18:16.514748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:25.065 
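Putting the pieces together, the FTL bdev is assembled from the thin lvol as base device and a 5171 MiB slice of the second NVMe namespace as NV cache; condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # carve the write-buffer cache out of nvc0n1 (one 5171 MiB split)
    "$rpc" bdev_split_create nvc0n1 -s 5171 1              # -> nvc0n1p0
    # bind base lvol + cache into an FTL bdev, capping the in-DRAM
    # L2P table at 60 MiB
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d 7e4796f6-7125-4f0d-8641-adc2cf5df015 \
        -c nvc0n1p0 --l2p_dram_limit 60

The layout dump that follows is internally consistent: 20971520 L2P entries at 4 bytes each is exactly 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line, and at one entry per 4 KiB block it addresses 80 GiB of logical space, the rest of the 103424 MiB base presumably going to FTL metadata and overprovisioning. The limit of 60 appears to come from l2p_percentage=60 earlier in the trace: 60% of 103424 MiB, at the table's 1024:1 ratio (4 B of L2P per 4 KiB of data), is about 60 MiB.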
[2024-12-11 13:18:16.514761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.514871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.514888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.065 [2024-12-11 13:18:16.514905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:25.065 [2024-12-11 13:18:16.514916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.514971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.065 [2024-12-11 13:18:16.516075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.065 [2024-12-11 13:18:16.516121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.516133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.065 [2024-12-11 13:18:16.516149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:20:25.065 [2024-12-11 13:18:16.516159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.516258] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 155f3c6b-9fe7-4825-ac23-e36fac574efc 00:20:25.065 [2024-12-11 13:18:16.518814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.518859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:25.065 [2024-12-11 13:18:16.518873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:25.065 [2024-12-11 13:18:16.518886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.533055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.533094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.065 [2024-12-11 13:18:16.533108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.092 ms 00:20:25.065 [2024-12-11 13:18:16.533157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.533311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.533344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.065 [2024-12-11 13:18:16.533356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:25.065 [2024-12-11 13:18:16.533375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.533450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.533466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.065 [2024-12-11 13:18:16.533478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:25.065 [2024-12-11 13:18:16.533492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.533535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.065 [2024-12-11 13:18:16.539531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 
13:18:16.539565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.065 [2024-12-11 13:18:16.539580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.016 ms 00:20:25.065 [2024-12-11 13:18:16.539610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.539661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.539672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.065 [2024-12-11 13:18:16.539687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:25.065 [2024-12-11 13:18:16.539697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.539748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:25.065 [2024-12-11 13:18:16.539917] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.065 [2024-12-11 13:18:16.539942] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.065 [2024-12-11 13:18:16.539956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:25.065 [2024-12-11 13:18:16.539978] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.065 [2024-12-11 13:18:16.539991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.065 [2024-12-11 13:18:16.540006] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:25.065 [2024-12-11 13:18:16.540017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.065 [2024-12-11 13:18:16.540030] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.065 [2024-12-11 13:18:16.540041] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.065 [2024-12-11 13:18:16.540055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.540068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.065 [2024-12-11 13:18:16.540082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:25.065 [2024-12-11 13:18:16.540093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.540203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.065 [2024-12-11 13:18:16.540216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.065 [2024-12-11 13:18:16.540229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:25.065 [2024-12-11 13:18:16.540255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.065 [2024-12-11 13:18:16.540398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.065 [2024-12-11 13:18:16.540411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.065 [2024-12-11 13:18:16.540434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.065 [2024-12-11 13:18:16.540445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.065 [2024-12-11 13:18:16.540459] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:25.065 [2024-12-11 13:18:16.540469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.065 [2024-12-11 13:18:16.540483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:25.065 [2024-12-11 13:18:16.540493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.065 [2024-12-11 13:18:16.540506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:25.065 [2024-12-11 13:18:16.540515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.065 [2024-12-11 13:18:16.540527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.065 [2024-12-11 13:18:16.540536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:25.065 [2024-12-11 13:18:16.540548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.066 [2024-12-11 13:18:16.540561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.066 [2024-12-11 13:18:16.540574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:25.066 [2024-12-11 13:18:16.540583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.066 [2024-12-11 13:18:16.540608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.066 [2024-12-11 13:18:16.540643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.066 [2024-12-11 13:18:16.540673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.066 [2024-12-11 13:18:16.540706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.066 [2024-12-11 13:18:16.540736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.066 [2024-12-11 13:18:16.540774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.066 [2024-12-11 13:18:16.540815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.066 [2024-12-11 13:18:16.540824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:25.066 [2024-12-11 13:18:16.540836] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.066 [2024-12-11 13:18:16.540846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.066 [2024-12-11 13:18:16.540858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:25.066 [2024-12-11 13:18:16.540867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.066 [2024-12-11 13:18:16.540888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:25.066 [2024-12-11 13:18:16.540900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.066 [2024-12-11 13:18:16.540923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.066 [2024-12-11 13:18:16.540934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.066 [2024-12-11 13:18:16.540947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.066 [2024-12-11 13:18:16.540957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.066 [2024-12-11 13:18:16.540973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.066 [2024-12-11 13:18:16.540982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.066 [2024-12-11 13:18:16.540994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.066 [2024-12-11 13:18:16.541003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.066 [2024-12-11 13:18:16.541016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.066 [2024-12-11 13:18:16.541027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.066 [2024-12-11 13:18:16.541043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:25.066 [2024-12-11 13:18:16.541068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:25.066 [2024-12-11 13:18:16.541079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:25.066 [2024-12-11 13:18:16.541094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:25.066 [2024-12-11 13:18:16.541104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:25.066 [2024-12-11 13:18:16.541130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:25.066 [2024-12-11 13:18:16.541141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:25.066 [2024-12-11 13:18:16.541154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:25.066 [2024-12-11 13:18:16.541165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:25.066 [2024-12-11 13:18:16.541181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:25.066 [2024-12-11 13:18:16.541240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.066 [2024-12-11 13:18:16.541269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.066 [2024-12-11 13:18:16.541301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.066 [2024-12-11 13:18:16.541311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.066 [2024-12-11 13:18:16.541325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.066 [2024-12-11 13:18:16.541337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.066 [2024-12-11 13:18:16.541350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.066 [2024-12-11 13:18:16.541364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:20:25.066 [2024-12-11 13:18:16.541378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.066 [2024-12-11 13:18:16.541474] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
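The startup steps traced above (open base bdev, open cache bdev, default-initialize the superblock, lay out the metadata regions, then scrub the NV cache) are what SPDK's bdev_ftl_create RPC drives when the device pair carries no prior FTL superblock. A minimal sketch of that call follows, assuming the stock scripts/rpc.py interface (flag spellings can vary between SPDK releases); "my_base_bdev" is a placeholder name, while ftl0 and nvc0n1p0 match the names reported in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create an FTL bdev over a base bdev plus an NV-cache bdev. With no
# existing superblock on the devices this takes the "Create new FTL,
# UUID ..." path seen above, including the one-time "Scrub NV cache" step.
$rpc bdev_ftl_create -b ftl0 -d my_base_bdev -c nvc0n1p0
# Tear it down again; this is the 'FTL shutdown' sequence later in this log.
$rpc bdev_ftl_unload -b ftl0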
00:20:25.066 [2024-12-11 13:18:16.541495] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:29.258 [2024-12-11 13:18:20.517908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.517999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:29.258 [2024-12-11 13:18:20.518018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3982.887 ms 00:20:29.258 [2024-12-11 13:18:20.518034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.564505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.564571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.258 [2024-12-11 13:18:20.564605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.172 ms 00:20:29.258 [2024-12-11 13:18:20.564620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.564772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.564789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.258 [2024-12-11 13:18:20.564801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:29.258 [2024-12-11 13:18:20.564819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.626906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.626961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.258 [2024-12-11 13:18:20.626982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.128 ms 00:20:29.258 [2024-12-11 13:18:20.626995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.627047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.627060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.258 [2024-12-11 13:18:20.627071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.258 [2024-12-11 13:18:20.627084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.627945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.627970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.258 [2024-12-11 13:18:20.627982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:20:29.258 [2024-12-11 13:18:20.628001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.628158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.628176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.258 [2024-12-11 13:18:20.628188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:20:29.258 [2024-12-11 13:18:20.628205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.653657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.653705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.258 [2024-12-11 
13:18:20.653721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.458 ms 00:20:29.258 [2024-12-11 13:18:20.653734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.667600] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.258 [2024-12-11 13:18:20.693159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.693409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.258 [2024-12-11 13:18:20.693444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.374 ms 00:20:29.258 [2024-12-11 13:18:20.693461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.784693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.784758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:29.258 [2024-12-11 13:18:20.784786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.314 ms 00:20:29.258 [2024-12-11 13:18:20.784798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.785069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.785086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.258 [2024-12-11 13:18:20.785106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:20:29.258 [2024-12-11 13:18:20.785143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.258 [2024-12-11 13:18:20.821266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.258 [2024-12-11 13:18:20.821306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:29.258 [2024-12-11 13:18:20.821324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.097 ms 00:20:29.258 [2024-12-11 13:18:20.821335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.518 [2024-12-11 13:18:20.857009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.518 [2024-12-11 13:18:20.857044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:29.518 [2024-12-11 13:18:20.857062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.675 ms 00:20:29.518 [2024-12-11 13:18:20.857088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.518 [2024-12-11 13:18:20.857917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.518 [2024-12-11 13:18:20.857947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.518 [2024-12-11 13:18:20.857962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:20:29.518 [2024-12-11 13:18:20.857973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.518 [2024-12-11 13:18:20.958639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.518 [2024-12-11 13:18:20.958839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:29.518 [2024-12-11 13:18:20.958875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.750 ms 00:20:29.518 [2024-12-11 13:18:20.958891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.518 [2024-12-11 
13:18:20.998039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.518 [2024-12-11 13:18:20.998083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:20:29.518 [2024-12-11 13:18:20.998104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.090 ms
00:20:29.518 [2024-12-11 13:18:20.998131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.518 [2024-12-11 13:18:21.034114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.518 [2024-12-11 13:18:21.034161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:20:29.518 [2024-12-11 13:18:21.034179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.973 ms
00:20:29.518 [2024-12-11 13:18:21.034190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.518 [2024-12-11 13:18:21.070536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.518 [2024-12-11 13:18:21.070575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:29.518 [2024-12-11 13:18:21.070593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.333 ms
00:20:29.518 [2024-12-11 13:18:21.070605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.518 [2024-12-11 13:18:21.070662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.518 [2024-12-11 13:18:21.070675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:29.518 [2024-12-11 13:18:21.070696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:29.518 [2024-12-11 13:18:21.070707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.518 [2024-12-11 13:18:21.070846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.518 [2024-12-11 13:18:21.070859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:29.518 [2024-12-11 13:18:21.070874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:20:29.518 [2024-12-11 13:18:21.070889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.518 [2024-12-11 13:18:21.072343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4564.735 ms, result 0
00:20:29.518 {
00:20:29.518 "name": "ftl0",
00:20:29.518 "uuid": "155f3c6b-9fe7-4825-ac23-e36fac574efc"
00:20:29.518 }
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:20:29.777 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:20:30.036 [
00:20:30.036 {
00:20:30.036 "name": "ftl0",
00:20:30.036 "aliases": [
00:20:30.036 "155f3c6b-9fe7-4825-ac23-e36fac574efc"
00:20:30.036 ],
00:20:30.036 "product_name": "FTL disk",
00:20:30.036 "block_size": 4096,
00:20:30.036 "num_blocks": 20971520,
00:20:30.036 "uuid": "155f3c6b-9fe7-4825-ac23-e36fac574efc",
00:20:30.036 "assigned_rate_limits": {
00:20:30.036 "rw_ios_per_sec": 0,
00:20:30.036 "rw_mbytes_per_sec": 0,
00:20:30.036 "r_mbytes_per_sec": 0,
00:20:30.036 "w_mbytes_per_sec": 0
00:20:30.036 },
00:20:30.036 "claimed": false,
00:20:30.036 "zoned": false,
00:20:30.036 "supported_io_types": {
00:20:30.036 "read": true,
00:20:30.036 "write": true,
00:20:30.036 "unmap": true,
00:20:30.036 "flush": true,
00:20:30.036 "reset": false,
00:20:30.036 "nvme_admin": false,
00:20:30.036 "nvme_io": false,
00:20:30.036 "nvme_io_md": false,
00:20:30.036 "write_zeroes": true,
00:20:30.036 "zcopy": false,
00:20:30.036 "get_zone_info": false,
00:20:30.036 "zone_management": false,
00:20:30.036 "zone_append": false,
00:20:30.036 "compare": false,
00:20:30.036 "compare_and_write": false,
00:20:30.036 "abort": false,
00:20:30.036 "seek_hole": false,
00:20:30.036 "seek_data": false,
00:20:30.036 "copy": false,
00:20:30.036 "nvme_iov_md": false
00:20:30.036 },
00:20:30.036 "driver_specific": {
00:20:30.036 "ftl": {
00:20:30.036 "base_bdev": "7e4796f6-7125-4f0d-8641-adc2cf5df015",
00:20:30.036 "cache": "nvc0n1p0"
00:20:30.036 }
00:20:30.036 }
00:20:30.036 }
00:20:30.036 ]
00:20:30.036 13:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0
00:20:30.036 13:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:20:30.036 13:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:30.295 13:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
00:20:30.295 13:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:20:30.555 [2024-12-11 13:18:21.943882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.555 [2024-12-11 13:18:21.944168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:30.555 [2024-12-11 13:18:21.944200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:30.555 [2024-12-11 13:18:21.944217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.555 [2024-12-11 13:18:21.944287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:30.555 [2024-12-11 13:18:21.949076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.555 [2024-12-11 13:18:21.949119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:30.555 [2024-12-11 13:18:21.949137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms
00:20:30.555 [2024-12-11 13:18:21.949148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.555 [2024-12-11 13:18:21.949715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.555 [2024-12-11 13:18:21.949736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:30.555 [2024-12-11 13:18:21.949751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms
00:20:30.555 [2024-12-11 13:18:21.949761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.555 [2024-12-11 13:18:21.952315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.555 [2024-12-11 13:18:21.952342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:30.555
[2024-12-11 13:18:21.952357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:20:30.555 [2024-12-11 13:18:21.952368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:21.957363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:21.957395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:30.555 [2024-12-11 13:18:21.957410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:20:30.555 [2024-12-11 13:18:21.957436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:21.994071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:21.994111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:30.555 [2024-12-11 13:18:21.994166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.565 ms 00:20:30.555 [2024-12-11 13:18:21.994177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:22.016069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:22.016109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:30.555 [2024-12-11 13:18:22.016158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.848 ms 00:20:30.555 [2024-12-11 13:18:22.016169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:22.016419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:22.016434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:30.555 [2024-12-11 13:18:22.016448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:30.555 [2024-12-11 13:18:22.016459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:22.051448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:22.051482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:30.555 [2024-12-11 13:18:22.051499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.010 ms 00:20:30.555 [2024-12-11 13:18:22.051524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.555 [2024-12-11 13:18:22.086314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.555 [2024-12-11 13:18:22.086349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:30.555 [2024-12-11 13:18:22.086365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.791 ms 00:20:30.555 [2024-12-11 13:18:22.086375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.816 [2024-12-11 13:18:22.122897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.816 [2024-12-11 13:18:22.122946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:30.816 [2024-12-11 13:18:22.122965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.520 ms 00:20:30.816 [2024-12-11 13:18:22.122975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.816 [2024-12-11 13:18:22.158198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.816 [2024-12-11 13:18:22.158233] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:30.816 [2024-12-11 13:18:22.158249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.152 ms 00:20:30.816 [2024-12-11 13:18:22.158259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.816 [2024-12-11 13:18:22.158327] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:30.816 [2024-12-11 13:18:22.158348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 
[2024-12-11 13:18:22.158625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:30.816 [2024-12-11 13:18:22.158948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:30.816 [2024-12-11 13:18:22.158972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.158983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.158996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:30.817 [2024-12-11 13:18:22.159690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:30.817 [2024-12-11 13:18:22.159703] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 155f3c6b-9fe7-4825-ac23-e36fac574efc 00:20:30.817 [2024-12-11 13:18:22.159715] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:30.817 [2024-12-11 13:18:22.159731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:30.817 [2024-12-11 13:18:22.159741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:30.817 [2024-12-11 13:18:22.159758] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:30.817 [2024-12-11 13:18:22.159768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:30.817 [2024-12-11 13:18:22.159782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:30.817 [2024-12-11 13:18:22.159791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:30.817 [2024-12-11 13:18:22.159804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:30.817 [2024-12-11 13:18:22.159813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:30.817 [2024-12-11 13:18:22.159827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.817 [2024-12-11 13:18:22.159837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:30.817 [2024-12-11 13:18:22.159851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:20:30.817 [2024-12-11 13:18:22.159861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.817 [2024-12-11 13:18:22.181095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.817 [2024-12-11 13:18:22.181267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:30.817 [2024-12-11 13:18:22.181351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.197 ms 00:20:30.817 [2024-12-11 13:18:22.181389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.817 [2024-12-11 13:18:22.182096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.817 [2024-12-11 13:18:22.182205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:30.817 [2024-12-11 13:18:22.182283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:20:30.817 [2024-12-11 13:18:22.182374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.817 [2024-12-11 13:18:22.255589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.817 [2024-12-11 13:18:22.255742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.817 [2024-12-11 13:18:22.255823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.817 [2024-12-11 13:18:22.255861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
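The 'Rollback' entries running through this stretch unwind the completed startup actions in reverse (LIFO) order, reported with the same trace_step format as startup. Illustrative only (not SPDK source): the pattern reduces to a cleanup stack along these lines, in bash:

# Each successful step pushes its undo action; teardown pops newest-first,
# which is exactly the order the Rollback entries appear in above.
declare -a undo_stack=()
step() {
  local do_cmd=$1 undo_cmd=$2
  eval "$do_cmd" || { rollback; return 1; }
  undo_stack+=("$undo_cmd")
}
rollback() {
  local i
  for ((i=${#undo_stack[@]}-1; i>=0; i--)); do
    eval "${undo_stack[i]}"
  done
  undo_stack=()
}
step "echo 'Action: Open base bdev'"  "echo 'Rollback: Open base bdev'"
step "echo 'Action: Open cache bdev'" "echo 'Rollback: Open cache bdev'"
rollback   # prints the cache-bdev rollback first, the base-bdev rollback last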
00:20:30.817 [2024-12-11 13:18:22.256000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.817 [2024-12-11 13:18:22.256083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.817 [2024-12-11 13:18:22.256193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.817 [2024-12-11 13:18:22.256303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.817 [2024-12-11 13:18:22.256487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.817 [2024-12-11 13:18:22.256604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.817 [2024-12-11 13:18:22.256679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.817 [2024-12-11 13:18:22.256767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.817 [2024-12-11 13:18:22.256836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.817 [2024-12-11 13:18:22.256987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.817 [2024-12-11 13:18:22.257028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.817 [2024-12-11 13:18:22.257179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.398500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.398697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.077 [2024-12-11 13:18:22.398788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.398826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.501691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.501898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.077 [2024-12-11 13:18:22.501982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.502018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.502205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.502289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.077 [2024-12-11 13:18:22.502337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.502367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.502545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.502563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.077 [2024-12-11 13:18:22.502579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.502590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.502742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.502756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.077 [2024-12-11 13:18:22.502770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 
13:18:22.502784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.502856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.502869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.077 [2024-12-11 13:18:22.502883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.502894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.502958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.502970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.077 [2024-12-11 13:18:22.502984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.502994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.503069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.077 [2024-12-11 13:18:22.503082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.077 [2024-12-11 13:18:22.503096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.077 [2024-12-11 13:18:22.503106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.077 [2024-12-11 13:18:22.503343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.324 ms, result 0 00:20:31.077 true 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78301 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 78301 ']' 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 78301 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78301 00:20:31.077 killing process with pid 78301 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78301' 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 78301 00:20:31.077 13:18:22 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 78301 00:20:35.265 13:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:35.266 13:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:35.266 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:35.266 fio-3.35 00:20:35.266 Starting 1 thread 00:20:41.833 00:20:41.833 test: (groupid=0, jobs=1): err= 0: pid=78524: Wed Dec 11 13:18:32 2024 00:20:41.833 read: IOPS=933, BW=62.0MiB/s (65.0MB/s)(255MiB/4105msec) 00:20:41.833 slat (nsec): min=4598, max=31162, avg=6393.77, stdev=2524.57 00:20:41.833 clat (usec): min=325, max=1455, avg=490.17, stdev=56.00 00:20:41.833 lat (usec): min=330, max=1464, avg=496.56, stdev=56.28 00:20:41.833 clat percentiles (usec): 00:20:41.833 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 412], 20.00th=[ 461], 00:20:41.833 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 519], 00:20:41.833 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 553], 00:20:41.833 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 938], 99.95th=[ 1123], 00:20:41.833 | 99.99th=[ 1450] 00:20:41.833 write: IOPS=940, BW=62.4MiB/s (65.5MB/s)(256MiB/4101msec); 0 zone resets 00:20:41.833 slat (usec): min=15, max=148, avg=19.29, stdev= 4.38 00:20:41.833 clat (usec): min=354, max=1373, avg=541.26, stdev=69.85 00:20:41.833 lat (usec): min=391, max=1392, avg=560.55, stdev=70.29 00:20:41.833 clat percentiles (usec): 00:20:41.833 | 1.00th=[ 416], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 486], 00:20:41.833 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 545], 60.00th=[ 553], 00:20:41.833 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 627], 00:20:41.833 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 1188], 99.95th=[ 1319], 00:20:41.833 | 99.99th=[ 1369] 00:20:41.833 bw ( KiB/s): min=61744, max=65688, per=100.00%, avg=64022.00, stdev=1531.35, samples=8 00:20:41.833 iops : min= 908, max= 966, avg=941.50, stdev=22.52, samples=8 00:20:41.833 lat (usec) : 500=44.84%, 750=54.05%, 1000=0.99% 00:20:41.833 lat 
(msec) : 2=0.12% 00:20:41.833 cpu : usr=99.22%, sys=0.12%, ctx=6, majf=0, minf=1167 00:20:41.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.833 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.834 00:20:41.834 Run status group 0 (all jobs): 00:20:41.834 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=255MiB (267MB), run=4105-4105msec 00:20:41.834 WRITE: bw=62.4MiB/s (65.5MB/s), 62.4MiB/s-62.4MiB/s (65.5MB/s-65.5MB/s), io=256MiB (269MB), run=4101-4101msec 00:20:43.212 ----------------------------------------------------- 00:20:43.212 Suppressions used: 00:20:43.212 count bytes template 00:20:43.212 1 5 /usr/src/fio/parse.c 00:20:43.212 1 8 libtcmalloc_minimal.so 00:20:43.212 1 904 libcrypto.so 00:20:43.212 ----------------------------------------------------- 00:20:43.212 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:43.212 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.213 13:18:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:43.213 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:43.213 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:43.213 fio-3.35 00:20:43.213 Starting 2 threads 00:21:15.302 00:21:15.302 first_half: (groupid=0, jobs=1): err= 0: pid=78637: Wed Dec 11 13:19:01 2024 00:21:15.302 read: IOPS=2543, BW=9.94MiB/s (10.4MB/s)(255MiB/25650msec) 00:21:15.302 slat (nsec): min=3540, max=71073, avg=6786.56, stdev=2674.54 00:21:15.302 clat (usec): min=1064, max=278914, avg=37251.20, stdev=21036.27 00:21:15.302 lat (usec): min=1078, max=278920, avg=37257.98, stdev=21036.66 00:21:15.302 clat percentiles (msec): 00:21:15.302 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:21:15.302 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:21:15.302 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 46], 00:21:15.302 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 239], 99.95th=[ 257], 00:21:15.302 | 99.99th=[ 275] 00:21:15.302 write: IOPS=2990, BW=11.7MiB/s (12.2MB/s)(256MiB/21915msec); 0 zone resets 00:21:15.302 slat (usec): min=4, max=611, avg=10.16, stdev= 7.96 00:21:15.302 clat (usec): min=427, max=103776, avg=12968.55, stdev=21826.59 00:21:15.302 lat (usec): min=445, max=103795, avg=12978.71, stdev=21827.29 00:21:15.302 clat percentiles (usec): 00:21:15.302 | 1.00th=[ 1012], 5.00th=[ 1303], 10.00th=[ 1549], 20.00th=[ 1909], 00:21:15.302 | 30.00th=[ 2769], 40.00th=[ 4752], 50.00th=[ 6194], 60.00th=[ 7177], 00:21:15.302 | 70.00th=[ 8455], 80.00th=[ 11994], 90.00th=[ 34866], 95.00th=[ 81265], 00:21:15.302 | 99.00th=[ 86508], 99.50th=[ 87557], 99.90th=[100140], 99.95th=[101188], 00:21:15.302 | 99.99th=[102237] 00:21:15.302 bw ( KiB/s): min= 528, max=47800, per=78.27%, avg=18724.57, stdev=13439.53, samples=28 00:21:15.302 iops : min= 132, max=11950, avg=4681.14, stdev=3359.88, samples=28 00:21:15.302 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.39% 00:21:15.302 lat (msec) : 2=10.92%, 4=7.16%, 10=20.55%, 20=7.24%, 50=47.08% 00:21:15.302 lat (msec) : 100=5.26%, 250=1.29%, 500=0.03% 00:21:15.302 cpu : usr=99.21%, sys=0.21%, ctx=52, majf=0, minf=5597 00:21:15.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:15.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.302 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.302 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.302 second_half: (groupid=0, jobs=1): err= 0: pid=78638: Wed Dec 11 13:19:01 2024 00:21:15.302 read: IOPS=2555, BW=9.98MiB/s (10.5MB/s)(255MiB/25514msec) 00:21:15.302 slat (usec): min=3, max=122, avg= 8.69, stdev= 4.53 00:21:15.302 clat (usec): min=911, max=292235, avg=37804.79, stdev=19354.33 00:21:15.302 lat (usec): min=927, max=292254, avg=37813.48, stdev=19354.73 00:21:15.302 clat percentiles (msec): 00:21:15.302 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:21:15.302 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:21:15.302 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 49], 00:21:15.302 
| 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 201], 00:21:15.302 | 99.99th=[ 288] 00:21:15.302 write: IOPS=3281, BW=12.8MiB/s (13.4MB/s)(256MiB/19974msec); 0 zone resets 00:21:15.302 slat (usec): min=4, max=477, avg=12.06, stdev= 7.26 00:21:15.302 clat (usec): min=389, max=103608, avg=12183.17, stdev=21383.64 00:21:15.302 lat (usec): min=404, max=103615, avg=12195.23, stdev=21384.27 00:21:15.302 clat percentiles (usec): 00:21:15.302 | 1.00th=[ 1106], 5.00th=[ 1418], 10.00th=[ 1631], 20.00th=[ 1926], 00:21:15.302 | 30.00th=[ 2540], 40.00th=[ 4490], 50.00th=[ 6063], 60.00th=[ 6980], 00:21:15.302 | 70.00th=[ 8160], 80.00th=[ 11338], 90.00th=[ 14222], 95.00th=[ 81265], 00:21:15.302 | 99.00th=[ 86508], 99.50th=[ 88605], 99.90th=[ 99091], 99.95th=[102237], 00:21:15.302 | 99.99th=[102237] 00:21:15.302 bw ( KiB/s): min= 896, max=40400, per=99.61%, avg=23831.27, stdev=11896.75, samples=22 00:21:15.302 iops : min= 224, max=10100, avg=5957.82, stdev=2974.19, samples=22 00:21:15.302 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.17% 00:21:15.302 lat (msec) : 2=11.00%, 4=7.81%, 10=19.76%, 20=8.03%, 50=46.59% 00:21:15.302 lat (msec) : 100=5.24%, 250=1.34%, 500=0.01% 00:21:15.302 cpu : usr=99.15%, sys=0.16%, ctx=37, majf=0, minf=5520 00:21:15.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:15.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.302 complete : 0=0.0%, 4=99.2%, 8=0.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.302 issued rwts: total=65192,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.302 00:21:15.302 Run status group 0 (all jobs): 00:21:15.302 READ: bw=19.9MiB/s (20.8MB/s), 9.94MiB/s-9.98MiB/s (10.4MB/s-10.5MB/s), io=510MiB (534MB), run=25514-25650msec 00:21:15.302 WRITE: bw=23.4MiB/s (24.5MB/s), 11.7MiB/s-12.8MiB/s (12.2MB/s-13.4MB/s), io=512MiB (537MB), run=19974-21915msec 00:21:15.302 ----------------------------------------------------- 00:21:15.302 Suppressions used: 00:21:15.302 count bytes template 00:21:15.302 2 10 /usr/src/fio/parse.c 00:21:15.302 3 288 /usr/src/fio/iolog.c 00:21:15.302 1 8 libtcmalloc_minimal.so 00:21:15.302 1 904 libcrypto.so 00:21:15.302 ----------------------------------------------------- 00:21:15.302 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.302 
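The xtrace records on either side of this point show how fio_bdev arms AddressSanitizer before every fio invocation: it ldd's the SPDK fio plugin, greps the output for a linked sanitizer runtime, and preloads that library ahead of the plugin so ASAN's hooks are installed before fio dlopen()s the spdk_bdev ioengine. A condensed, self-contained sketch of that pattern follows (the plugin, fio, and job-file paths are the ones in the trace; run_fio_with_asan is an illustrative name, not a function from the SPDK tree):

run_fio_with_asan() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    local job=$1
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib=

    for sanitizer in "${sanitizers[@]}"; do
        # Third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # Preload the sanitizer runtime ahead of the plugin so its hooks are
    # live before fio loads the spdk_bdev ioengine.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"
}

run_fio_with_asan /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio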
13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:15.302 13:19:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:15.302 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:15.302 fio-3.35 00:21:15.302 Starting 1 thread 00:21:30.192 00:21:30.192 test: (groupid=0, jobs=1): err= 0: pid=78973: Wed Dec 11 13:19:20 2024 00:21:30.192 read: IOPS=7444, BW=29.1MiB/s (30.5MB/s)(255MiB/8758msec) 00:21:30.192 slat (nsec): min=3379, max=36377, avg=5659.64, stdev=2164.25 00:21:30.192 clat (usec): min=745, max=34042, avg=17182.79, stdev=930.09 00:21:30.192 lat (usec): min=749, max=34049, avg=17188.45, stdev=930.04 00:21:30.192 clat percentiles (usec): 00:21:30.192 | 1.00th=[15926], 5.00th=[16188], 10.00th=[16450], 20.00th=[16581], 00:21:30.192 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:21:30.192 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:21:30.192 | 99.00th=[20055], 99.50th=[20579], 99.90th=[25560], 99.95th=[29754], 00:21:30.192 | 99.99th=[33424] 00:21:30.192 write: IOPS=12.5k, BW=48.8MiB/s (51.2MB/s)(256MiB/5245msec); 0 zone resets 00:21:30.192 slat (usec): min=4, max=712, avg= 8.26, stdev= 7.90 00:21:30.192 clat (usec): min=604, max=71289, avg=10194.89, stdev=12758.81 00:21:30.192 lat (usec): min=612, max=71297, avg=10203.15, stdev=12758.86 00:21:30.192 clat percentiles (usec): 00:21:30.192 | 1.00th=[ 1020], 5.00th=[ 1221], 10.00th=[ 1385], 20.00th=[ 1598], 00:21:30.192 | 30.00th=[ 1795], 40.00th=[ 2245], 50.00th=[ 6456], 60.00th=[ 7767], 00:21:30.192 | 70.00th=[ 8717], 80.00th=[10552], 90.00th=[36963], 95.00th=[38536], 00:21:30.192 | 99.00th=[44827], 99.50th=[53740], 99.90th=[64226], 99.95th=[67634], 00:21:30.192 | 99.99th=[69731] 00:21:30.192 bw ( KiB/s): min=18592, max=67320, per=95.36%, avg=47662.55, stdev=13085.38, samples=11 00:21:30.192 iops : min= 4648, max=16830, avg=11915.64, stdev=3271.35, samples=11 00:21:30.192 lat (usec) : 750=0.01%, 1000=0.39% 00:21:30.193 lat (msec) : 2=18.18%, 4=2.49%, 10=17.54%, 20=52.75%, 50=8.28% 00:21:30.193 lat (msec) : 100=0.35% 00:21:30.193 cpu : usr=98.87%, sys=0.37%, ctx=23, majf=0, 
minf=5563 00:21:30.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:30.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.193 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:30.193 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:30.193 00:21:30.193 Run status group 0 (all jobs): 00:21:30.193 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=255MiB (267MB), run=8758-8758msec 00:21:30.193 WRITE: bw=48.8MiB/s (51.2MB/s), 48.8MiB/s-48.8MiB/s (51.2MB/s-51.2MB/s), io=256MiB (268MB), run=5245-5245msec 00:21:31.154 ----------------------------------------------------- 00:21:31.154 Suppressions used: 00:21:31.154 count bytes template 00:21:31.154 1 5 /usr/src/fio/parse.c 00:21:31.154 2 192 /usr/src/fio/iolog.c 00:21:31.154 1 8 libtcmalloc_minimal.so 00:21:31.154 1 904 libcrypto.so 00:21:31.154 ----------------------------------------------------- 00:21:31.154 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:31.154 Remove shared memory files 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:31.154 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:31.155 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58985 /dev/shm/spdk_tgt_trace.pid77194 00:21:31.155 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:31.155 13:19:22 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:31.155 ************************************ 00:21:31.155 END TEST ftl_fio_basic 00:21:31.155 ************************************ 00:21:31.155 00:21:31.155 real 1m10.576s 00:21:31.155 user 2m30.976s 00:21:31.155 sys 0m4.669s 00:21:31.155 13:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.155 13:19:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:31.155 13:19:22 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:31.155 13:19:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:31.155 13:19:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.155 13:19:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:31.155 ************************************ 00:21:31.155 START TEST ftl_bdevperf 00:21:31.155 ************************************ 00:21:31.155 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:31.414 * Looking for test storage... 
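The scripts/common.sh xtrace a little further down ("lt 1.15 2" while probing the installed lcov) compares versions in plain bash: both strings are split on dots, dashes, and colons into arrays, which are then walked component-wise. A condensed sketch of that comparison, assuming purely numeric components (the cmp_versions in the trace additionally normalizes each field through its decimal helper):

cmp_versions_lt() {
    # Split "1.15" -> (1 15); the IFS override only affects the read
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"

    local v a b max
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # A missing component compares as 0, so "2" behaves like "2.0"
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

cmp_versions_lt 1.15 2 && echo 'lcov predates 2.x'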
00:21:31.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:31.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.414 --rc genhtml_branch_coverage=1 00:21:31.414 --rc genhtml_function_coverage=1 00:21:31.414 --rc genhtml_legend=1 00:21:31.414 --rc geninfo_all_blocks=1 00:21:31.414 --rc geninfo_unexecuted_blocks=1 00:21:31.414 00:21:31.414 ' 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:31.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.414 --rc genhtml_branch_coverage=1 00:21:31.414 
--rc genhtml_function_coverage=1 00:21:31.414 --rc genhtml_legend=1 00:21:31.414 --rc geninfo_all_blocks=1 00:21:31.414 --rc geninfo_unexecuted_blocks=1 00:21:31.414 00:21:31.414 ' 00:21:31.414 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:31.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.415 --rc genhtml_branch_coverage=1 00:21:31.415 --rc genhtml_function_coverage=1 00:21:31.415 --rc genhtml_legend=1 00:21:31.415 --rc geninfo_all_blocks=1 00:21:31.415 --rc geninfo_unexecuted_blocks=1 00:21:31.415 00:21:31.415 ' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:31.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.415 --rc genhtml_branch_coverage=1 00:21:31.415 --rc genhtml_function_coverage=1 00:21:31.415 --rc genhtml_legend=1 00:21:31.415 --rc geninfo_all_blocks=1 00:21:31.415 --rc geninfo_unexecuted_blocks=1 00:21:31.415 00:21:31.415 ' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=79217 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:31.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 79217 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 79217 ']' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.415 13:19:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:31.674 [2024-12-11 13:19:23.016015] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
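This is the launch handshake the bdevperf tests use: start the application idle with -z (do nothing until told over RPC) against the target bdev named by -T, then block until it answers on /var/tmp/spdk.sock. A rough stand-alone equivalent of the start-plus-waitforlisten sequence recorded here (the polling loop approximates autotest_common.sh's waitforlisten rather than copying it):

spdk=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk.sock

# -z: stay idle until an RPC kicks off the run; -T ftl0: the bdev under test
"$spdk/build/examples/bdevperf" -z -T ftl0 &
bdevperf_pid=$!

# Poll the RPC socket until the target responds (waitforlisten's core idea)
for (( i = 0; i < 100; i++ )); do
    "$spdk/scripts/rpc.py" -t 1 -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done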
00:21:31.674 [2024-12-11 13:19:23.016430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79217 ] 00:21:31.674 [2024-12-11 13:19:23.200542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.933 [2024-12-11 13:19:23.349278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:32.501 13:19:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:32.760 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:33.019 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:33.019 { 00:21:33.019 "name": "nvme0n1", 00:21:33.019 "aliases": [ 00:21:33.019 "f5e3b2eb-0e29-4844-b64e-5a47ed9ccd78" 00:21:33.019 ], 00:21:33.019 "product_name": "NVMe disk", 00:21:33.019 "block_size": 4096, 00:21:33.019 "num_blocks": 1310720, 00:21:33.019 "uuid": "f5e3b2eb-0e29-4844-b64e-5a47ed9ccd78", 00:21:33.019 "numa_id": -1, 00:21:33.019 "assigned_rate_limits": { 00:21:33.019 "rw_ios_per_sec": 0, 00:21:33.019 "rw_mbytes_per_sec": 0, 00:21:33.019 "r_mbytes_per_sec": 0, 00:21:33.019 "w_mbytes_per_sec": 0 00:21:33.019 }, 00:21:33.019 "claimed": true, 00:21:33.019 "claim_type": "read_many_write_one", 00:21:33.019 "zoned": false, 00:21:33.019 "supported_io_types": { 00:21:33.019 "read": true, 00:21:33.019 "write": true, 00:21:33.019 "unmap": true, 00:21:33.019 "flush": true, 00:21:33.019 "reset": true, 00:21:33.019 "nvme_admin": true, 00:21:33.019 "nvme_io": true, 00:21:33.019 "nvme_io_md": false, 00:21:33.019 "write_zeroes": true, 00:21:33.019 "zcopy": false, 00:21:33.019 "get_zone_info": false, 00:21:33.019 "zone_management": false, 00:21:33.019 "zone_append": false, 00:21:33.019 "compare": true, 00:21:33.019 "compare_and_write": false, 00:21:33.019 "abort": true, 00:21:33.019 "seek_hole": false, 00:21:33.019 "seek_data": false, 00:21:33.019 "copy": true, 00:21:33.019 "nvme_iov_md": false 00:21:33.019 }, 00:21:33.019 "driver_specific": { 00:21:33.019 
"nvme": [ 00:21:33.019 { 00:21:33.019 "pci_address": "0000:00:11.0", 00:21:33.019 "trid": { 00:21:33.019 "trtype": "PCIe", 00:21:33.019 "traddr": "0000:00:11.0" 00:21:33.019 }, 00:21:33.019 "ctrlr_data": { 00:21:33.019 "cntlid": 0, 00:21:33.019 "vendor_id": "0x1b36", 00:21:33.019 "model_number": "QEMU NVMe Ctrl", 00:21:33.019 "serial_number": "12341", 00:21:33.019 "firmware_revision": "8.0.0", 00:21:33.019 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:33.020 "oacs": { 00:21:33.020 "security": 0, 00:21:33.020 "format": 1, 00:21:33.020 "firmware": 0, 00:21:33.020 "ns_manage": 1 00:21:33.020 }, 00:21:33.020 "multi_ctrlr": false, 00:21:33.020 "ana_reporting": false 00:21:33.020 }, 00:21:33.020 "vs": { 00:21:33.020 "nvme_version": "1.4" 00:21:33.020 }, 00:21:33.020 "ns_data": { 00:21:33.020 "id": 1, 00:21:33.020 "can_share": false 00:21:33.020 } 00:21:33.020 } 00:21:33.020 ], 00:21:33.020 "mp_policy": "active_passive" 00:21:33.020 } 00:21:33.020 } 00:21:33.020 ]' 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:33.020 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:33.279 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7dc7797c-a41c-4001-89a4-41c6eb48721c 00:21:33.279 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:33.279 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7dc7797c-a41c-4001-89a4-41c6eb48721c 00:21:33.538 13:19:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:33.796 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd 00:21:33.796 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.055 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:34.055 { 00:21:34.056 "name": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:34.056 "aliases": [ 00:21:34.056 "lvs/nvme0n1p0" 00:21:34.056 ], 00:21:34.056 "product_name": "Logical Volume", 00:21:34.056 "block_size": 4096, 00:21:34.056 "num_blocks": 26476544, 00:21:34.056 "uuid": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:34.056 "assigned_rate_limits": { 00:21:34.056 "rw_ios_per_sec": 0, 00:21:34.056 "rw_mbytes_per_sec": 0, 00:21:34.056 "r_mbytes_per_sec": 0, 00:21:34.056 "w_mbytes_per_sec": 0 00:21:34.056 }, 00:21:34.056 "claimed": false, 00:21:34.056 "zoned": false, 00:21:34.056 "supported_io_types": { 00:21:34.056 "read": true, 00:21:34.056 "write": true, 00:21:34.056 "unmap": true, 00:21:34.056 "flush": false, 00:21:34.056 "reset": true, 00:21:34.056 "nvme_admin": false, 00:21:34.056 "nvme_io": false, 00:21:34.056 "nvme_io_md": false, 00:21:34.056 "write_zeroes": true, 00:21:34.056 "zcopy": false, 00:21:34.056 "get_zone_info": false, 00:21:34.056 "zone_management": false, 00:21:34.056 "zone_append": false, 00:21:34.056 "compare": false, 00:21:34.056 "compare_and_write": false, 00:21:34.056 "abort": false, 00:21:34.056 "seek_hole": true, 00:21:34.056 "seek_data": true, 00:21:34.056 "copy": false, 00:21:34.056 "nvme_iov_md": false 00:21:34.056 }, 00:21:34.056 "driver_specific": { 00:21:34.056 "lvol": { 00:21:34.056 "lvol_store_uuid": "cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd", 00:21:34.056 "base_bdev": "nvme0n1", 00:21:34.056 "thin_provision": true, 00:21:34.056 "num_allocated_clusters": 0, 00:21:34.056 "snapshot": false, 00:21:34.056 "clone": false, 00:21:34.056 "esnap_clone": false 00:21:34.056 } 00:21:34.056 } 00:21:34.056 } 00:21:34.056 ]' 00:21:34.056 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:34.318 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:34.579 13:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:34.839 { 00:21:34.839 "name": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:34.839 "aliases": [ 00:21:34.839 "lvs/nvme0n1p0" 00:21:34.839 ], 00:21:34.839 "product_name": "Logical Volume", 00:21:34.839 "block_size": 4096, 00:21:34.839 "num_blocks": 26476544, 00:21:34.839 "uuid": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:34.839 "assigned_rate_limits": { 00:21:34.839 "rw_ios_per_sec": 0, 00:21:34.839 "rw_mbytes_per_sec": 0, 00:21:34.839 "r_mbytes_per_sec": 0, 00:21:34.839 "w_mbytes_per_sec": 0 00:21:34.839 }, 00:21:34.839 "claimed": false, 00:21:34.839 "zoned": false, 00:21:34.839 "supported_io_types": { 00:21:34.839 "read": true, 00:21:34.839 "write": true, 00:21:34.839 "unmap": true, 00:21:34.839 "flush": false, 00:21:34.839 "reset": true, 00:21:34.839 "nvme_admin": false, 00:21:34.839 "nvme_io": false, 00:21:34.839 "nvme_io_md": false, 00:21:34.839 "write_zeroes": true, 00:21:34.839 "zcopy": false, 00:21:34.839 "get_zone_info": false, 00:21:34.839 "zone_management": false, 00:21:34.839 "zone_append": false, 00:21:34.839 "compare": false, 00:21:34.839 "compare_and_write": false, 00:21:34.839 "abort": false, 00:21:34.839 "seek_hole": true, 00:21:34.839 "seek_data": true, 00:21:34.839 "copy": false, 00:21:34.839 "nvme_iov_md": false 00:21:34.839 }, 00:21:34.839 "driver_specific": { 00:21:34.839 "lvol": { 00:21:34.839 "lvol_store_uuid": "cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd", 00:21:34.839 "base_bdev": "nvme0n1", 00:21:34.839 "thin_provision": true, 00:21:34.839 "num_allocated_clusters": 0, 00:21:34.839 "snapshot": false, 00:21:34.839 "clone": false, 00:21:34.839 "esnap_clone": false 00:21:34.839 } 00:21:34.839 } 00:21:34.839 } 00:21:34.839 ]' 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:34.839 13:19:26 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=01420b78-8e80-4194-a236-c5885e96dbdb 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:35.098 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01420b78-8e80-4194-a236-c5885e96dbdb 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:35.357 { 00:21:35.357 "name": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:35.357 "aliases": [ 00:21:35.357 "lvs/nvme0n1p0" 00:21:35.357 ], 00:21:35.357 "product_name": "Logical Volume", 00:21:35.357 "block_size": 4096, 00:21:35.357 "num_blocks": 26476544, 00:21:35.357 "uuid": "01420b78-8e80-4194-a236-c5885e96dbdb", 00:21:35.357 "assigned_rate_limits": { 00:21:35.357 "rw_ios_per_sec": 0, 00:21:35.357 "rw_mbytes_per_sec": 0, 00:21:35.357 "r_mbytes_per_sec": 0, 00:21:35.357 "w_mbytes_per_sec": 0 00:21:35.357 }, 00:21:35.357 "claimed": false, 00:21:35.357 "zoned": false, 00:21:35.357 "supported_io_types": { 00:21:35.357 "read": true, 00:21:35.357 "write": true, 00:21:35.357 "unmap": true, 00:21:35.357 "flush": false, 00:21:35.357 "reset": true, 00:21:35.357 "nvme_admin": false, 00:21:35.357 "nvme_io": false, 00:21:35.357 "nvme_io_md": false, 00:21:35.357 "write_zeroes": true, 00:21:35.357 "zcopy": false, 00:21:35.357 "get_zone_info": false, 00:21:35.357 "zone_management": false, 00:21:35.357 "zone_append": false, 00:21:35.357 "compare": false, 00:21:35.357 "compare_and_write": false, 00:21:35.357 "abort": false, 00:21:35.357 "seek_hole": true, 00:21:35.357 "seek_data": true, 00:21:35.357 "copy": false, 00:21:35.357 "nvme_iov_md": false 00:21:35.357 }, 00:21:35.357 "driver_specific": { 00:21:35.357 "lvol": { 00:21:35.357 "lvol_store_uuid": "cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd", 00:21:35.357 "base_bdev": "nvme0n1", 00:21:35.357 "thin_provision": true, 00:21:35.357 "num_allocated_clusters": 0, 00:21:35.357 "snapshot": false, 00:21:35.357 "clone": false, 00:21:35.357 "esnap_clone": false 00:21:35.357 } 00:21:35.357 } 00:21:35.357 } 00:21:35.357 ]' 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:35.357 13:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 01420b78-8e80-4194-a236-c5885e96dbdb -c nvc0n1p0 --l2p_dram_limit 20 00:21:35.617 [2024-12-11 13:19:27.003457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.617 [2024-12-11 13:19:27.003536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.617 [2024-12-11 13:19:27.003555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:35.617 [2024-12-11 13:19:27.003570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.617 [2024-12-11 13:19:27.003650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.617 [2024-12-11 13:19:27.003666] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.617 [2024-12-11 13:19:27.003677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:35.617 [2024-12-11 13:19:27.003692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.617 [2024-12-11 13:19:27.003714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.617 [2024-12-11 13:19:27.004869] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.617 [2024-12-11 13:19:27.005061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.617 [2024-12-11 13:19:27.005085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.617 [2024-12-11 13:19:27.005098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:21:35.617 [2024-12-11 13:19:27.005128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.005231] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b17deaca-0096-4fac-bb3f-740acac8d4aa 00:21:35.618 [2024-12-11 13:19:27.007730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.007767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:35.618 [2024-12-11 13:19:27.007789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:35.618 [2024-12-11 13:19:27.007800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.022184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.022330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.618 [2024-12-11 13:19:27.022415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.326 ms 00:21:35.618 [2024-12-11 13:19:27.022456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.022605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.022641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.618 [2024-12-11 13:19:27.022683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:35.618 [2024-12-11 13:19:27.022755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.022844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.022858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.618 [2024-12-11 13:19:27.022873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:35.618 [2024-12-11 13:19:27.022884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.022917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.618 [2024-12-11 13:19:27.029473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.029512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.618 [2024-12-11 13:19:27.029525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.582 ms 00:21:35.618 [2024-12-11 13:19:27.029545] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.029603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.029626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.618 [2024-12-11 13:19:27.029644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:35.618 [2024-12-11 13:19:27.029662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.029699] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:35.618 [2024-12-11 13:19:27.029859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.618 [2024-12-11 13:19:27.029874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.618 [2024-12-11 13:19:27.029893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.618 [2024-12-11 13:19:27.029907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.618 [2024-12-11 13:19:27.029924] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.618 [2024-12-11 13:19:27.029937] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.618 [2024-12-11 13:19:27.029951] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.618 [2024-12-11 13:19:27.029962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.618 [2024-12-11 13:19:27.029977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.618 [2024-12-11 13:19:27.029992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.030006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.618 [2024-12-11 13:19:27.030025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:35.618 [2024-12-11 13:19:27.030039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.030135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.618 [2024-12-11 13:19:27.030151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.618 [2024-12-11 13:19:27.030163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:35.618 [2024-12-11 13:19:27.030179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.618 [2024-12-11 13:19:27.030264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.618 [2024-12-11 13:19:27.030284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.618 [2024-12-11 13:19:27.030295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.618 [2024-12-11 13:19:27.030333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.618 
[2024-12-11 13:19:27.030355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.618 [2024-12-11 13:19:27.030365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.618 [2024-12-11 13:19:27.030387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.618 [2024-12-11 13:19:27.030416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.618 [2024-12-11 13:19:27.030425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.618 [2024-12-11 13:19:27.030437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.618 [2024-12-11 13:19:27.030449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.618 [2024-12-11 13:19:27.030466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.618 [2024-12-11 13:19:27.030487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.618 [2024-12-11 13:19:27.030519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.618 [2024-12-11 13:19:27.030554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.618 [2024-12-11 13:19:27.030586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.618 [2024-12-11 13:19:27.030619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.618 [2024-12-11 13:19:27.030654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.618 [2024-12-11 13:19:27.030675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.618 [2024-12-11 13:19:27.030687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.618 [2024-12-11 13:19:27.030696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.618 [2024-12-11 13:19:27.030711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.618 [2024-12-11 13:19:27.030720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:35.618 [2024-12-11 13:19:27.030732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.618 [2024-12-11 13:19:27.030755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.618 [2024-12-11 13:19:27.030764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030775] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.618 [2024-12-11 13:19:27.030786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.618 [2024-12-11 13:19:27.030799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.618 [2024-12-11 13:19:27.030829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.618 [2024-12-11 13:19:27.030839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.618 [2024-12-11 13:19:27.030851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.618 [2024-12-11 13:19:27.030861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.618 [2024-12-11 13:19:27.030874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.618 [2024-12-11 13:19:27.030884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.618 [2024-12-11 13:19:27.030899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.619 [2024-12-11 13:19:27.030911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.030926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.619 [2024-12-11 13:19:27.030937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.619 [2024-12-11 13:19:27.030950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.619 [2024-12-11 13:19:27.030961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.619 [2024-12-11 13:19:27.030974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.619 [2024-12-11 13:19:27.030984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.619 [2024-12-11 13:19:27.030998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.619 [2024-12-11 13:19:27.031008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.619 [2024-12-11 13:19:27.031027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.619 [2024-12-11 13:19:27.031038] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.619 [2024-12-11 13:19:27.031099] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.619 [2024-12-11 13:19:27.031111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.619 [2024-12-11 13:19:27.031151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.619 [2024-12-11 13:19:27.031165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.619 [2024-12-11 13:19:27.031176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.619 [2024-12-11 13:19:27.031190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.619 [2024-12-11 13:19:27.031201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.619 [2024-12-11 13:19:27.031215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:21:35.619 [2024-12-11 13:19:27.031226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.619 [2024-12-11 13:19:27.031275] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
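For reference, the ftl0 device whose startup trace ends here was assembled with a handful of rpc.py calls scattered through the log above; gathered into one sequence (addresses, sizes, and UUIDs are exactly the ones in this run's trace — a fresh run gets new UUIDs back from the two lvol calls):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device

$rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> cd2a81a9-...
$rpc bdev_lvol_create nvme0n1p0 103424 -t \
    -u cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd                         # thin 101 GiB lvol -> 01420b78-...
$rpc bdev_split_create nvc0n1 -s 5171 1                             # nvc0n1p0: 5171 MiB write-buffer slice

# 240 s timeout: first startup scrubs the NV cache region (the ~3.5 s step above)
$rpc -t 240 bdev_ftl_create -b ftl0 -d 01420b78-8e80-4194-a236-c5885e96dbdb \
    -c nvc0n1p0 --l2p_dram_limit 20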
00:21:35.619 [2024-12-11 13:19:27.031288] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:39.813 [2024-12-11 13:19:30.485600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.485683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:39.813 [2024-12-11 13:19:30.485707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3459.929 ms 00:21:39.813 [2024-12-11 13:19:30.485719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.813 [2024-12-11 13:19:30.534586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.534645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.813 [2024-12-11 13:19:30.534667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.585 ms 00:21:39.813 [2024-12-11 13:19:30.534679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.813 [2024-12-11 13:19:30.534858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.534872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:39.813 [2024-12-11 13:19:30.534901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:39.813 [2024-12-11 13:19:30.534912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.813 [2024-12-11 13:19:30.601503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.601731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.813 [2024-12-11 13:19:30.601767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.627 ms 00:21:39.813 [2024-12-11 13:19:30.601780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.813 [2024-12-11 13:19:30.601847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.601859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.813 [2024-12-11 13:19:30.601873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:39.813 [2024-12-11 13:19:30.601888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.813 [2024-12-11 13:19:30.602738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.813 [2024-12-11 13:19:30.602754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.813 [2024-12-11 13:19:30.602769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:21:39.813 [2024-12-11 13:19:30.602780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.602913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.602927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.814 [2024-12-11 13:19:30.602945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:39.814 [2024-12-11 13:19:30.602955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.626506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.626550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.814 [2024-12-11 
13:19:30.626570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.561 ms 00:21:39.814 [2024-12-11 13:19:30.626595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.641610] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:39.814 [2024-12-11 13:19:30.651027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.651069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:39.814 [2024-12-11 13:19:30.651101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:21:39.814 [2024-12-11 13:19:30.651116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.744383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.744621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:39.814 [2024-12-11 13:19:30.744651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.356 ms 00:21:39.814 [2024-12-11 13:19:30.744667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.744907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.744930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:39.814 [2024-12-11 13:19:30.744943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:21:39.814 [2024-12-11 13:19:30.744963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.781666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.781715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:39.814 [2024-12-11 13:19:30.781731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.707 ms 00:21:39.814 [2024-12-11 13:19:30.781746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.817549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.817727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:39.814 [2024-12-11 13:19:30.817751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.820 ms 00:21:39.814 [2024-12-11 13:19:30.817766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.818631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.818663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:39.814 [2024-12-11 13:19:30.818676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:21:39.814 [2024-12-11 13:19:30.818690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.917010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.917094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:39.814 [2024-12-11 13:19:30.917138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.421 ms 00:21:39.814 [2024-12-11 13:19:30.917154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 
13:19:30.955906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.955964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:39.814 [2024-12-11 13:19:30.956001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.710 ms 00:21:39.814 [2024-12-11 13:19:30.956016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:30.992583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:30.992786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:39.814 [2024-12-11 13:19:30.992809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.583 ms 00:21:39.814 [2024-12-11 13:19:30.992823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:31.028830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:31.028875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:39.814 [2024-12-11 13:19:31.028891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.973 ms 00:21:39.814 [2024-12-11 13:19:31.028905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:31.028949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:31.028968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:39.814 [2024-12-11 13:19:31.028981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:39.814 [2024-12-11 13:19:31.028994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:31.029107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.814 [2024-12-11 13:19:31.029143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:39.814 [2024-12-11 13:19:31.029155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:39.814 [2024-12-11 13:19:31.029168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.814 [2024-12-11 13:19:31.030596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4033.147 ms, result 0 00:21:39.814 { 00:21:39.814 "name": "ftl0", 00:21:39.814 "uuid": "b17deaca-0096-4fac-bb3f-740acac8d4aa" 00:21:39.814 } 00:21:39.814 13:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:39.814 13:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:39.814 13:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:39.814 13:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:39.814 [2024-12-11 13:19:31.350398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:39.814 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:39.814 Zero copy mechanism will not be used. 00:21:39.814 Running I/O for 4 seconds... 
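
A note on the zero-copy message above: the first bdevperf pass uses 69632-byte I/Os, which is 17 blocks of 4096 B (68 KiB), one block past the 65536-byte threshold, so bdevperf falls back to the copying path. The MiB/s column in the table that follows is simply IOPS times I/O size:

    echo $(( 69632 / 4096 ))                  # 17 blocks of 4096 B, i.e. 68 KiB > 64 KiB
    echo "1600.27 * 69632 / 1048576" | bc -l  # ~106.27 MiB/s, matching the ftl0 row below
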
00:21:42.129 1569.00 IOPS, 104.19 MiB/s [2024-12-11T13:19:34.635Z] 1567.00 IOPS, 104.06 MiB/s [2024-12-11T13:19:35.572Z] 1573.33 IOPS, 104.48 MiB/s [2024-12-11T13:19:35.572Z] 1600.75 IOPS, 106.30 MiB/s 00:21:44.004 Latency(us) 00:21:44.004 [2024-12-11T13:19:35.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.004 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:44.004 ftl0 : 4.00 1600.27 106.27 0.00 0.00 655.24 259.91 18529.05 00:21:44.004 [2024-12-11T13:19:35.572Z] =================================================================================================================== 00:21:44.004 [2024-12-11T13:19:35.572Z] Total : 1600.27 106.27 0.00 0.00 655.24 259.91 18529.05 00:21:44.004 [2024-12-11 13:19:35.356652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:44.004 { 00:21:44.004 "results": [ 00:21:44.004 { 00:21:44.004 "job": "ftl0", 00:21:44.004 "core_mask": "0x1", 00:21:44.004 "workload": "randwrite", 00:21:44.004 "status": "finished", 00:21:44.005 "queue_depth": 1, 00:21:44.005 "io_size": 69632, 00:21:44.005 "runtime": 4.001815, 00:21:44.005 "iops": 1600.273875728888, 00:21:44.005 "mibps": 106.26818706012148, 00:21:44.005 "io_failed": 0, 00:21:44.005 "io_timeout": 0, 00:21:44.005 "avg_latency_us": 655.2360357106126, 00:21:44.005 "min_latency_us": 259.906827309237, 00:21:44.005 "max_latency_us": 18529.053815261042 00:21:44.005 } 00:21:44.005 ], 00:21:44.005 "core_count": 1 00:21:44.005 } 00:21:44.005 13:19:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:44.005 [2024-12-11 13:19:35.475275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:44.005 Running I/O for 4 seconds... 
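
Each perform_tests pass prints its summary twice, as the human-readable table and as the JSON blob seen above. A hedged example of slicing that blob with jq, where results.json is a hypothetical file holding exactly the JSON printed above (the field names are taken verbatim from it):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json
    # -> ftl0: 1600.273875728888 IOPS, avg 655.2360357106126 us
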
00:21:46.318 11642.00 IOPS, 45.48 MiB/s [2024-12-11T13:19:38.821Z] 11008.00 IOPS, 43.00 MiB/s [2024-12-11T13:19:39.756Z] 10666.33 IOPS, 41.67 MiB/s [2024-12-11T13:19:39.756Z] 10688.25 IOPS, 41.75 MiB/s 00:21:48.188 Latency(us) 00:21:48.188 [2024-12-11T13:19:39.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.188 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:48.188 ftl0 : 4.02 10676.75 41.71 0.00 0.00 11963.70 235.23 20529.35 00:21:48.188 [2024-12-11T13:19:39.756Z] =================================================================================================================== 00:21:48.188 [2024-12-11T13:19:39.756Z] Total : 10676.75 41.71 0.00 0.00 11963.70 0.00 20529.35 00:21:48.188 { 00:21:48.188 "results": [ 00:21:48.188 { 00:21:48.188 "job": "ftl0", 00:21:48.188 "core_mask": "0x1", 00:21:48.188 "workload": "randwrite", 00:21:48.188 "status": "finished", 00:21:48.188 "queue_depth": 128, 00:21:48.188 "io_size": 4096, 00:21:48.188 "runtime": 4.016109, 00:21:48.188 "iops": 10676.752050305407, 00:21:48.188 "mibps": 41.7060626965055, 00:21:48.188 "io_failed": 0, 00:21:48.188 "io_timeout": 0, 00:21:48.188 "avg_latency_us": 11963.702350173568, 00:21:48.188 "min_latency_us": 235.23212851405623, 00:21:48.188 "max_latency_us": 20529.349397590362 00:21:48.188 } 00:21:48.188 ], 00:21:48.188 "core_count": 1 00:21:48.188 } 00:21:48.188 [2024-12-11 13:19:39.496432] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:48.188 13:19:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:48.188 [2024-12-11 13:19:39.622945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:48.188 Running I/O for 4 seconds... 
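
The verify pass that follows reports its checked range in hex ("start 0x0 length 0x1400000") while the accompanying JSON repeats it in decimal under "verify_range"; the two are one base conversion apart:

    printf '%d\n' 0x1400000   # 20971520, the "length" value in the JSON below
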
00:21:50.504 7439.00 IOPS, 29.06 MiB/s [2024-12-11T13:19:42.639Z] 8064.50 IOPS, 31.50 MiB/s [2024-12-11T13:19:44.018Z] 8259.33 IOPS, 32.26 MiB/s [2024-12-11T13:19:44.018Z] 7962.75 IOPS, 31.10 MiB/s 00:21:52.450 Latency(us) 00:21:52.450 [2024-12-11T13:19:44.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.450 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:52.450 Verification LBA range: start 0x0 length 0x1400000 00:21:52.450 ftl0 : 4.01 7973.03 31.14 0.00 0.00 16003.82 274.71 21266.30 00:21:52.450 [2024-12-11T13:19:44.018Z] =================================================================================================================== 00:21:52.450 [2024-12-11T13:19:44.018Z] Total : 7973.03 31.14 0.00 0.00 16003.82 0.00 21266.30 00:21:52.450 [2024-12-11 13:19:43.648555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:52.450 { 00:21:52.450 "results": [ 00:21:52.450 { 00:21:52.450 "job": "ftl0", 00:21:52.450 "core_mask": "0x1", 00:21:52.450 "workload": "verify", 00:21:52.450 "status": "finished", 00:21:52.450 "verify_range": { 00:21:52.450 "start": 0, 00:21:52.450 "length": 20971520 00:21:52.450 }, 00:21:52.450 "queue_depth": 128, 00:21:52.450 "io_size": 4096, 00:21:52.450 "runtime": 4.010773, 00:21:52.450 "iops": 7973.026645985699, 00:21:52.450 "mibps": 31.144635335881638, 00:21:52.450 "io_failed": 0, 00:21:52.450 "io_timeout": 0, 00:21:52.450 "avg_latency_us": 16003.821853226904, 00:21:52.450 "min_latency_us": 274.7116465863454, 00:21:52.450 "max_latency_us": 21266.300401606426 00:21:52.450 } 00:21:52.450 ], 00:21:52.450 "core_count": 1 00:21:52.450 } 00:21:52.450 13:19:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:52.450 [2024-12-11 13:19:43.869188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.450 [2024-12-11 13:19:43.869789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:52.450 [2024-12-11 13:19:43.869822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:52.450 [2024-12-11 13:19:43.869838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.450 [2024-12-11 13:19:43.869888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.450 [2024-12-11 13:19:43.874813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.450 [2024-12-11 13:19:43.874846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:52.450 [2024-12-11 13:19:43.874865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:21:52.450 [2024-12-11 13:19:43.874876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.450 [2024-12-11 13:19:43.877013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.450 [2024-12-11 13:19:43.877177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:52.450 [2024-12-11 13:19:43.877209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:21:52.450 [2024-12-11 13:19:43.877225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.091801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.092045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:21:52.710 [2024-12-11 13:19:44.092125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.884 ms 00:21:52.710 [2024-12-11 13:19:44.092138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.097155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.097189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.710 [2024-12-11 13:19:44.097205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:21:52.710 [2024-12-11 13:19:44.097220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.134758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.134943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.710 [2024-12-11 13:19:44.134972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.509 ms 00:21:52.710 [2024-12-11 13:19:44.134983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.157359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.157522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.710 [2024-12-11 13:19:44.157557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.369 ms 00:21:52.710 [2024-12-11 13:19:44.157569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.157773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.157789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.710 [2024-12-11 13:19:44.157808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:52.710 [2024-12-11 13:19:44.157819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.193443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.193495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:52.710 [2024-12-11 13:19:44.193513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.659 ms 00:21:52.710 [2024-12-11 13:19:44.193538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.228725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.228760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:52.710 [2024-12-11 13:19:44.228775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.193 ms 00:21:52.710 [2024-12-11 13:19:44.228800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.710 [2024-12-11 13:19:44.263421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.710 [2024-12-11 13:19:44.263583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.710 [2024-12-11 13:19:44.263630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.632 ms 00:21:52.710 [2024-12-11 13:19:44.263641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.971 [2024-12-11 13:19:44.299725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.971 [2024-12-11 
13:19:44.299763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.971 [2024-12-11 13:19:44.299782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.039 ms 00:21:52.971 [2024-12-11 13:19:44.299809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.971 [2024-12-11 13:19:44.299851] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.971 [2024-12-11 13:19:44.299870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.299990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300838] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.300995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.301006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:52.971 [2024-12-11 13:19:44.301020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:52.972 [2024-12-11 13:19:44.301233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.972 [2024-12-11 13:19:44.301247] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b17deaca-0096-4fac-bb3f-740acac8d4aa 00:21:52.972 [2024-12-11 13:19:44.301262] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.972 [2024-12-11 13:19:44.301276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:52.972 [2024-12-11 13:19:44.301285] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.972 [2024-12-11 13:19:44.301299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.972 [2024-12-11 13:19:44.301310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.972 [2024-12-11 13:19:44.301324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.972 [2024-12-11 13:19:44.301335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.972 [2024-12-11 13:19:44.301350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.972 [2024-12-11 13:19:44.301360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.972 [2024-12-11 13:19:44.301373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.972 [2024-12-11 13:19:44.301384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.972 [2024-12-11 13:19:44.301398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:21:52.972 [2024-12-11 13:19:44.301409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.322217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.972 [2024-12-11 13:19:44.322251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:52.972 [2024-12-11 13:19:44.322268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.783 ms 00:21:52.972 [2024-12-11 13:19:44.322279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.322848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.972 [2024-12-11 13:19:44.322863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:52.972 [2024-12-11 13:19:44.322877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:21:52.972 [2024-12-11 13:19:44.322888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.381316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.972 [2024-12-11 13:19:44.381511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.972 [2024-12-11 13:19:44.381544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.972 [2024-12-11 13:19:44.381563] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.381637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.972 [2024-12-11 13:19:44.381650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.972 [2024-12-11 13:19:44.381665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.972 [2024-12-11 13:19:44.381675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.381776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.972 [2024-12-11 13:19:44.381790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.972 [2024-12-11 13:19:44.381804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.972 [2024-12-11 13:19:44.381815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.381838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.972 [2024-12-11 13:19:44.381849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.972 [2024-12-11 13:19:44.381863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.972 [2024-12-11 13:19:44.381873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.972 [2024-12-11 13:19:44.513085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.972 [2024-12-11 13:19:44.513424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.972 [2024-12-11 13:19:44.513462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.972 [2024-12-11 13:19:44.513473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.618909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.618989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.232 [2024-12-11 13:19:44.619025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.232 [2024-12-11 13:19:44.619264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.232 [2024-12-11 13:19:44.619367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.232 [2024-12-11 13:19:44.619547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:53.232 [2024-12-11 13:19:44.619558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.232 [2024-12-11 13:19:44.619633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.232 [2024-12-11 13:19:44.619724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.232 [2024-12-11 13:19:44.619815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.232 [2024-12-11 13:19:44.619829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.232 [2024-12-11 13:19:44.619839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.232 [2024-12-11 13:19:44.619999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 751.971 ms, result 0 00:21:53.232 true 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 79217 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 79217 ']' 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 79217 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79217 00:21:53.232 killing process with pid 79217 00:21:53.232 Received shutdown signal, test time was about 4.000000 seconds 00:21:53.232 00:21:53.232 Latency(us) 00:21:53.232 [2024-12-11T13:19:44.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.232 [2024-12-11T13:19:44.800Z] =================================================================================================================== 00:21:53.232 [2024-12-11T13:19:44.800Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79217' 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 79217 00:21:53.232 13:19:44 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 79217 00:21:54.611 Remove shared memory files 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:54.611 13:19:46 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:54.611 ************************************ 00:21:54.611 END TEST ftl_bdevperf 00:21:54.611 ************************************ 00:21:54.611 00:21:54.611 real 0m23.507s 00:21:54.611 user 0m26.017s 00:21:54.611 sys 0m1.490s 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.611 13:19:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 13:19:46 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:54.873 13:19:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:54.873 13:19:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.873 13:19:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 ************************************ 00:21:54.873 START TEST ftl_trim 00:21:54.873 ************************************ 00:21:54.873 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:54.873 * Looking for test storage... 00:21:54.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:54.873 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:54.873 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:21:54.873 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:54.873 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.873 13:19:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.133 13:19:46 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:55.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.133 --rc genhtml_branch_coverage=1 00:21:55.133 --rc genhtml_function_coverage=1 00:21:55.133 --rc genhtml_legend=1 00:21:55.133 --rc geninfo_all_blocks=1 00:21:55.133 --rc geninfo_unexecuted_blocks=1 00:21:55.133 00:21:55.133 ' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:55.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.133 --rc genhtml_branch_coverage=1 00:21:55.133 --rc genhtml_function_coverage=1 00:21:55.133 --rc genhtml_legend=1 00:21:55.133 --rc geninfo_all_blocks=1 00:21:55.133 --rc geninfo_unexecuted_blocks=1 00:21:55.133 00:21:55.133 ' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:55.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.133 --rc genhtml_branch_coverage=1 00:21:55.133 --rc genhtml_function_coverage=1 00:21:55.133 --rc genhtml_legend=1 00:21:55.133 --rc geninfo_all_blocks=1 00:21:55.133 --rc geninfo_unexecuted_blocks=1 00:21:55.133 00:21:55.133 ' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:55.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.133 --rc genhtml_branch_coverage=1 00:21:55.133 --rc genhtml_function_coverage=1 00:21:55.133 --rc genhtml_legend=1 00:21:55.133 --rc geninfo_all_blocks=1 00:21:55.133 --rc geninfo_unexecuted_blocks=1 00:21:55.133 00:21:55.133 ' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
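
The xtrace above shows trim.sh asking scripts/common.sh's cmp_versions whether lcov 1.15 predates version 2. A minimal standalone sketch of the traced idea, simplified to the leading field only (the real helper keeps looping over every dot-separated field):

    IFS=.-: read -ra ver1 <<< "1.15"   # split on ".-:" exactly as the trace shows
    IFS=.-: read -ra ver2 <<< "2"
    (( ${ver1[0]} < ${ver2[0]} )) && echo "1.15 < 2"   # prints, since 1 < 2
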
00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:55.133 13:19:46 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79575 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:55.133 13:19:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79575 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79575 ']' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.133 13:19:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 [2024-12-11 13:19:46.614359] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:21:55.133 [2024-12-11 13:19:46.614746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79575 ] 00:21:55.393 [2024-12-11 13:19:46.804432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:55.393 [2024-12-11 13:19:46.950629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.393 [2024-12-11 13:19:46.950767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.393 [2024-12-11 13:19:46.950803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.778 13:19:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.778 13:19:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:56.778 13:19:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:56.778 13:19:48 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:56.778 13:19:48 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:56.778 13:19:48 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:56.778 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:56.778 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:56.778 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:56.778 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:56.778 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:57.038 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:57.038 { 00:21:57.038 "name": "nvme0n1", 00:21:57.038 "aliases": [ 
00:21:57.038 "04dde0f9-9ce5-43d0-91bc-5377ded85669" 00:21:57.038 ], 00:21:57.038 "product_name": "NVMe disk", 00:21:57.038 "block_size": 4096, 00:21:57.038 "num_blocks": 1310720, 00:21:57.038 "uuid": "04dde0f9-9ce5-43d0-91bc-5377ded85669", 00:21:57.038 "numa_id": -1, 00:21:57.038 "assigned_rate_limits": { 00:21:57.038 "rw_ios_per_sec": 0, 00:21:57.038 "rw_mbytes_per_sec": 0, 00:21:57.038 "r_mbytes_per_sec": 0, 00:21:57.038 "w_mbytes_per_sec": 0 00:21:57.038 }, 00:21:57.038 "claimed": true, 00:21:57.038 "claim_type": "read_many_write_one", 00:21:57.038 "zoned": false, 00:21:57.038 "supported_io_types": { 00:21:57.038 "read": true, 00:21:57.038 "write": true, 00:21:57.038 "unmap": true, 00:21:57.038 "flush": true, 00:21:57.038 "reset": true, 00:21:57.038 "nvme_admin": true, 00:21:57.038 "nvme_io": true, 00:21:57.038 "nvme_io_md": false, 00:21:57.038 "write_zeroes": true, 00:21:57.038 "zcopy": false, 00:21:57.038 "get_zone_info": false, 00:21:57.039 "zone_management": false, 00:21:57.039 "zone_append": false, 00:21:57.039 "compare": true, 00:21:57.039 "compare_and_write": false, 00:21:57.039 "abort": true, 00:21:57.039 "seek_hole": false, 00:21:57.039 "seek_data": false, 00:21:57.039 "copy": true, 00:21:57.039 "nvme_iov_md": false 00:21:57.039 }, 00:21:57.039 "driver_specific": { 00:21:57.039 "nvme": [ 00:21:57.039 { 00:21:57.039 "pci_address": "0000:00:11.0", 00:21:57.039 "trid": { 00:21:57.039 "trtype": "PCIe", 00:21:57.039 "traddr": "0000:00:11.0" 00:21:57.039 }, 00:21:57.039 "ctrlr_data": { 00:21:57.039 "cntlid": 0, 00:21:57.039 "vendor_id": "0x1b36", 00:21:57.039 "model_number": "QEMU NVMe Ctrl", 00:21:57.039 "serial_number": "12341", 00:21:57.039 "firmware_revision": "8.0.0", 00:21:57.039 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:57.039 "oacs": { 00:21:57.039 "security": 0, 00:21:57.039 "format": 1, 00:21:57.039 "firmware": 0, 00:21:57.039 "ns_manage": 1 00:21:57.039 }, 00:21:57.039 "multi_ctrlr": false, 00:21:57.039 "ana_reporting": false 00:21:57.039 }, 00:21:57.039 "vs": { 00:21:57.039 "nvme_version": "1.4" 00:21:57.039 }, 00:21:57.039 "ns_data": { 00:21:57.039 "id": 1, 00:21:57.039 "can_share": false 00:21:57.039 } 00:21:57.039 } 00:21:57.039 ], 00:21:57.039 "mp_policy": "active_passive" 00:21:57.039 } 00:21:57.039 } 00:21:57.039 ]' 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:57.039 13:19:48 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:57.039 13:19:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:57.039 13:19:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:57.039 13:19:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:57.039 13:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:57.039 13:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:57.298 13:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd 00:21:57.299 13:19:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:57.299 13:19:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u cd2a81a9-f4b0-4ac2-92f4-bf6ec26aadbd 00:21:57.558 13:19:49 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:57.819 13:19:49 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=27bc1867-2df9-49a4-880a-2936b911cb4c 00:21:57.819 13:19:49 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 27bc1867-2df9-49a4-880a-2936b911cb4c 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:58.079 13:19:49 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.079 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.079 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:58.079 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:58.079 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:58.079 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:58.340 { 00:21:58.340 "name": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:58.340 "aliases": [ 00:21:58.340 "lvs/nvme0n1p0" 00:21:58.340 ], 00:21:58.340 "product_name": "Logical Volume", 00:21:58.340 "block_size": 4096, 00:21:58.340 "num_blocks": 26476544, 00:21:58.340 "uuid": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:58.340 "assigned_rate_limits": { 00:21:58.340 "rw_ios_per_sec": 0, 00:21:58.340 "rw_mbytes_per_sec": 0, 00:21:58.340 "r_mbytes_per_sec": 0, 00:21:58.340 "w_mbytes_per_sec": 0 00:21:58.340 }, 00:21:58.340 "claimed": false, 00:21:58.340 "zoned": false, 00:21:58.340 "supported_io_types": { 00:21:58.340 "read": true, 00:21:58.340 "write": true, 00:21:58.340 "unmap": true, 00:21:58.340 "flush": false, 00:21:58.340 "reset": true, 00:21:58.340 "nvme_admin": false, 00:21:58.340 "nvme_io": false, 00:21:58.340 "nvme_io_md": false, 00:21:58.340 "write_zeroes": true, 00:21:58.340 "zcopy": false, 00:21:58.340 "get_zone_info": false, 00:21:58.340 "zone_management": false, 00:21:58.340 "zone_append": false, 00:21:58.340 "compare": false, 00:21:58.340 "compare_and_write": false, 00:21:58.340 "abort": false, 00:21:58.340 "seek_hole": true, 00:21:58.340 "seek_data": true, 00:21:58.340 "copy": false, 00:21:58.340 "nvme_iov_md": false 00:21:58.340 }, 00:21:58.340 "driver_specific": { 00:21:58.340 "lvol": { 00:21:58.340 "lvol_store_uuid": "27bc1867-2df9-49a4-880a-2936b911cb4c", 00:21:58.340 "base_bdev": "nvme0n1", 00:21:58.340 "thin_provision": true, 00:21:58.340 "num_allocated_clusters": 0, 00:21:58.340 "snapshot": false, 00:21:58.340 "clone": false, 00:21:58.340 "esnap_clone": false 00:21:58.340 } 00:21:58.340 } 00:21:58.340 } 00:21:58.340 ]' 00:21:58.340 13:19:49 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:58.340 13:19:49 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:58.340 13:19:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:58.340 13:19:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:58.340 13:19:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:58.600 13:19:50 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:58.600 13:19:50 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:58.600 13:19:50 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.600 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.600 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:58.600 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:58.600 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:58.600 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:58.860 { 00:21:58.860 "name": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:58.860 "aliases": [ 00:21:58.860 "lvs/nvme0n1p0" 00:21:58.860 ], 00:21:58.860 "product_name": "Logical Volume", 00:21:58.860 "block_size": 4096, 00:21:58.860 "num_blocks": 26476544, 00:21:58.860 "uuid": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:58.860 "assigned_rate_limits": { 00:21:58.860 "rw_ios_per_sec": 0, 00:21:58.860 "rw_mbytes_per_sec": 0, 00:21:58.860 "r_mbytes_per_sec": 0, 00:21:58.860 "w_mbytes_per_sec": 0 00:21:58.860 }, 00:21:58.860 "claimed": false, 00:21:58.860 "zoned": false, 00:21:58.860 "supported_io_types": { 00:21:58.860 "read": true, 00:21:58.860 "write": true, 00:21:58.860 "unmap": true, 00:21:58.860 "flush": false, 00:21:58.860 "reset": true, 00:21:58.860 "nvme_admin": false, 00:21:58.860 "nvme_io": false, 00:21:58.860 "nvme_io_md": false, 00:21:58.860 "write_zeroes": true, 00:21:58.860 "zcopy": false, 00:21:58.860 "get_zone_info": false, 00:21:58.860 "zone_management": false, 00:21:58.860 "zone_append": false, 00:21:58.860 "compare": false, 00:21:58.860 "compare_and_write": false, 00:21:58.860 "abort": false, 00:21:58.860 "seek_hole": true, 00:21:58.860 "seek_data": true, 00:21:58.860 "copy": false, 00:21:58.860 "nvme_iov_md": false 00:21:58.860 }, 00:21:58.860 "driver_specific": { 00:21:58.860 "lvol": { 00:21:58.860 "lvol_store_uuid": "27bc1867-2df9-49a4-880a-2936b911cb4c", 00:21:58.860 "base_bdev": "nvme0n1", 00:21:58.860 "thin_provision": true, 00:21:58.860 "num_allocated_clusters": 0, 00:21:58.860 "snapshot": false, 00:21:58.860 "clone": false, 00:21:58.860 "esnap_clone": false 00:21:58.860 } 00:21:58.860 } 00:21:58.860 } 00:21:58.860 ]' 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:58.860 13:19:50 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:58.860 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:58.860 13:19:50 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:58.860 13:19:50 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:59.119 13:19:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:59.119 13:19:50 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:59.119 13:19:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:59.119 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:59.119 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:59.119 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:59.119 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:59.119 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f0d45c1-3115-45fa-b087-0f00e89ffc28 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:59.378 { 00:21:59.378 "name": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:59.378 "aliases": [ 00:21:59.378 "lvs/nvme0n1p0" 00:21:59.378 ], 00:21:59.378 "product_name": "Logical Volume", 00:21:59.378 "block_size": 4096, 00:21:59.378 "num_blocks": 26476544, 00:21:59.378 "uuid": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:21:59.378 "assigned_rate_limits": { 00:21:59.378 "rw_ios_per_sec": 0, 00:21:59.378 "rw_mbytes_per_sec": 0, 00:21:59.378 "r_mbytes_per_sec": 0, 00:21:59.378 "w_mbytes_per_sec": 0 00:21:59.378 }, 00:21:59.378 "claimed": false, 00:21:59.378 "zoned": false, 00:21:59.378 "supported_io_types": { 00:21:59.378 "read": true, 00:21:59.378 "write": true, 00:21:59.378 "unmap": true, 00:21:59.378 "flush": false, 00:21:59.378 "reset": true, 00:21:59.378 "nvme_admin": false, 00:21:59.378 "nvme_io": false, 00:21:59.378 "nvme_io_md": false, 00:21:59.378 "write_zeroes": true, 00:21:59.378 "zcopy": false, 00:21:59.378 "get_zone_info": false, 00:21:59.378 "zone_management": false, 00:21:59.378 "zone_append": false, 00:21:59.378 "compare": false, 00:21:59.378 "compare_and_write": false, 00:21:59.378 "abort": false, 00:21:59.378 "seek_hole": true, 00:21:59.378 "seek_data": true, 00:21:59.378 "copy": false, 00:21:59.378 "nvme_iov_md": false 00:21:59.378 }, 00:21:59.378 "driver_specific": { 00:21:59.378 "lvol": { 00:21:59.378 "lvol_store_uuid": "27bc1867-2df9-49a4-880a-2936b911cb4c", 00:21:59.378 "base_bdev": "nvme0n1", 00:21:59.378 "thin_provision": true, 00:21:59.378 "num_allocated_clusters": 0, 00:21:59.378 "snapshot": false, 00:21:59.378 "clone": false, 00:21:59.378 "esnap_clone": false 00:21:59.378 } 00:21:59.378 } 00:21:59.378 } 00:21:59.378 ]' 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:59.378 13:19:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:59.378 13:19:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:59.378 13:19:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3f0d45c1-3115-45fa-b087-0f00e89ffc28 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:59.638 [2024-12-11 13:19:51.080401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.080465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:59.638 [2024-12-11 13:19:51.080487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:59.638 [2024-12-11 13:19:51.080499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.084288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.084331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.638 [2024-12-11 13:19:51.084348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:21:59.638 [2024-12-11 13:19:51.084359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.084488] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:59.638 [2024-12-11 13:19:51.085509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:59.638 [2024-12-11 13:19:51.085548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.085567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.638 [2024-12-11 13:19:51.085582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:21:59.638 [2024-12-11 13:19:51.085592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.085717] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:21:59.638 [2024-12-11 13:19:51.088151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.088189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:59.638 [2024-12-11 13:19:51.088202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:59.638 [2024-12-11 13:19:51.088216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.102122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.102162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.638 [2024-12-11 13:19:51.102183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.838 ms 00:21:59.638 [2024-12-11 13:19:51.102197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.102395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.102415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.638 [2024-12-11 13:19:51.102427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.106 ms 00:21:59.638 [2024-12-11 13:19:51.102446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.102490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.102505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:59.638 [2024-12-11 13:19:51.102516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:59.638 [2024-12-11 13:19:51.102534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.102573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:59.638 [2024-12-11 13:19:51.108677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.108715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.638 [2024-12-11 13:19:51.108732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:21:59.638 [2024-12-11 13:19:51.108743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.108821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.638 [2024-12-11 13:19:51.108860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:59.638 [2024-12-11 13:19:51.108876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:59.638 [2024-12-11 13:19:51.108887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.638 [2024-12-11 13:19:51.108929] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:59.638 [2024-12-11 13:19:51.109074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:59.639 [2024-12-11 13:19:51.109096] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:59.639 [2024-12-11 13:19:51.109111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:59.639 [2024-12-11 13:19:51.109144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109157] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:59.639 [2024-12-11 13:19:51.109185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:59.639 [2024-12-11 13:19:51.109199] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:59.639 [2024-12-11 13:19:51.109212] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:59.639 [2024-12-11 13:19:51.109227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.639 [2024-12-11 13:19:51.109252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:59.639 [2024-12-11 13:19:51.109266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:59.639 [2024-12-11 13:19:51.109276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.639 [2024-12-11 13:19:51.109374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:59.639 [2024-12-11 13:19:51.109385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:59.639 [2024-12-11 13:19:51.109400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:59.639 [2024-12-11 13:19:51.109410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.639 [2024-12-11 13:19:51.109557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:59.639 [2024-12-11 13:19:51.109571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:59.639 [2024-12-11 13:19:51.109586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:59.639 [2024-12-11 13:19:51.109621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:59.639 [2024-12-11 13:19:51.109657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.639 [2024-12-11 13:19:51.109678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:59.639 [2024-12-11 13:19:51.109688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:59.639 [2024-12-11 13:19:51.109700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.639 [2024-12-11 13:19:51.109709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:59.639 [2024-12-11 13:19:51.109722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:59.639 [2024-12-11 13:19:51.109734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:59.639 [2024-12-11 13:19:51.109759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:59.639 [2024-12-11 13:19:51.109794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:59.639 [2024-12-11 13:19:51.109824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:59.639 [2024-12-11 13:19:51.109858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109879] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:21:59.639 [2024-12-11 13:19:51.109889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.639 [2024-12-11 13:19:51.109911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:59.639 [2024-12-11 13:19:51.109925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:59.639 [2024-12-11 13:19:51.109934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.639 [2024-12-11 13:19:51.109948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:59.639 [2024-12-11 13:19:51.109957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:59.639 [2024-12-11 13:19:51.109968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.639 [2024-12-11 13:19:51.109977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:59.639 [2024-12-11 13:19:51.109989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:59.639 [2024-12-11 13:19:51.109999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.110011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:59.639 [2024-12-11 13:19:51.110020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:59.639 [2024-12-11 13:19:51.110032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.110041] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:59.639 [2024-12-11 13:19:51.110054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:59.639 [2024-12-11 13:19:51.110064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.639 [2024-12-11 13:19:51.110077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.639 [2024-12-11 13:19:51.110089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:59.639 [2024-12-11 13:19:51.110105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:59.639 [2024-12-11 13:19:51.110123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:59.639 [2024-12-11 13:19:51.110137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:59.639 [2024-12-11 13:19:51.110146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:59.639 [2024-12-11 13:19:51.110159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:59.639 [2024-12-11 13:19:51.110170] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:59.639 [2024-12-11 13:19:51.110186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.639 [2024-12-11 13:19:51.110201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:59.639 [2024-12-11 13:19:51.110215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:59.639 [2024-12-11 13:19:51.110226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:21:59.639 [2024-12-11 13:19:51.110239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:59.639 [2024-12-11 13:19:51.110249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:59.639 [2024-12-11 13:19:51.110264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:59.639 [2024-12-11 13:19:51.110274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:59.640 [2024-12-11 13:19:51.110288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:59.640 [2024-12-11 13:19:51.110298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:59.640 [2024-12-11 13:19:51.110315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:59.640 [2024-12-11 13:19:51.110374] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:59.640 [2024-12-11 13:19:51.110406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:59.640 [2024-12-11 13:19:51.110431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:59.640 [2024-12-11 13:19:51.110441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:59.640 [2024-12-11 13:19:51.110455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:59.640 [2024-12-11 13:19:51.110466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.640 [2024-12-11 13:19:51.110480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:59.640 [2024-12-11 13:19:51.110491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:21:59.640 [2024-12-11 13:19:51.110504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.640 [2024-12-11 13:19:51.110600] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:21:59.640 [2024-12-11 13:19:51.110620] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:03.840 [2024-12-11 13:19:54.536336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.536663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:03.840 [2024-12-11 13:19:54.536693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3431.296 ms 00:22:03.840 [2024-12-11 13:19:54.536709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.582091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.582162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.840 [2024-12-11 13:19:54.582183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.061 ms 00:22:03.840 [2024-12-11 13:19:54.582198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.582411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.582429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.840 [2024-12-11 13:19:54.582466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:03.840 [2024-12-11 13:19:54.582485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.649888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.649952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.840 [2024-12-11 13:19:54.649970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.465 ms 00:22:03.840 [2024-12-11 13:19:54.649986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.650134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.650153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.840 [2024-12-11 13:19:54.650165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.840 [2024-12-11 13:19:54.650179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.650921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.650947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.840 [2024-12-11 13:19:54.650959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:22:03.840 [2024-12-11 13:19:54.650972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.651103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.651126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.840 [2024-12-11 13:19:54.651155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:03.840 [2024-12-11 13:19:54.651173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.676848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.676909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:22:03.840 [2024-12-11 13:19:54.676929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.679 ms 00:22:03.840 [2024-12-11 13:19:54.676944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.691409] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:03.840 [2024-12-11 13:19:54.717321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.717394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.840 [2024-12-11 13:19:54.717416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.262 ms 00:22:03.840 [2024-12-11 13:19:54.717427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.817351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.817653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:03.840 [2024-12-11 13:19:54.817687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.894 ms 00:22:03.840 [2024-12-11 13:19:54.817700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.818008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.818025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.840 [2024-12-11 13:19:54.818045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:22:03.840 [2024-12-11 13:19:54.818056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.854945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.854992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:03.840 [2024-12-11 13:19:54.855013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.905 ms 00:22:03.840 [2024-12-11 13:19:54.855025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.890448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.890617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:03.840 [2024-12-11 13:19:54.890647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.366 ms 00:22:03.840 [2024-12-11 13:19:54.890657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.891542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.891568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.840 [2024-12-11 13:19:54.891584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:22:03.840 [2024-12-11 13:19:54.891595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.840 [2024-12-11 13:19:54.991924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.840 [2024-12-11 13:19:54.991996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:03.840 [2024-12-11 13:19:54.992039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.444 ms 00:22:03.841 [2024-12-11 13:19:54.992051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.030974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.841 [2024-12-11 13:19:55.031033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:03.841 [2024-12-11 13:19:55.031056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.845 ms 00:22:03.841 [2024-12-11 13:19:55.031068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.068372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.841 [2024-12-11 13:19:55.068423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:03.841 [2024-12-11 13:19:55.068442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.226 ms 00:22:03.841 [2024-12-11 13:19:55.068468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.104714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.841 [2024-12-11 13:19:55.104914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.841 [2024-12-11 13:19:55.104942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.213 ms 00:22:03.841 [2024-12-11 13:19:55.104954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.105088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.841 [2024-12-11 13:19:55.105107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.841 [2024-12-11 13:19:55.105146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:03.841 [2024-12-11 13:19:55.105175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.105275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.841 [2024-12-11 13:19:55.105287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:03.841 [2024-12-11 13:19:55.105302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:03.841 [2024-12-11 13:19:55.105318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.841 [2024-12-11 13:19:55.106565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.841 [2024-12-11 13:19:55.111032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4032.377 ms, result 0 00:22:03.841 [2024-12-11 13:19:55.112087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.841 { 00:22:03.841 "name": "ftl0", 00:22:03.841 "uuid": "540c5ffa-c404-4d48-a834-4d9cb8eefb38" 00:22:03.841 } 00:22:03.841 13:19:55 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:03.841 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:03.841 13:19:55 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:04.100 [ 00:22:04.100 { 00:22:04.100 "name": "ftl0", 00:22:04.100 "aliases": [ 00:22:04.100 "540c5ffa-c404-4d48-a834-4d9cb8eefb38" 00:22:04.100 ], 00:22:04.100 "product_name": "FTL disk", 00:22:04.100 "block_size": 4096, 00:22:04.100 "num_blocks": 23592960, 00:22:04.100 "uuid": "540c5ffa-c404-4d48-a834-4d9cb8eefb38", 00:22:04.100 "assigned_rate_limits": { 00:22:04.100 "rw_ios_per_sec": 0, 00:22:04.100 "rw_mbytes_per_sec": 0, 00:22:04.100 "r_mbytes_per_sec": 0, 00:22:04.100 "w_mbytes_per_sec": 0 00:22:04.100 }, 00:22:04.100 "claimed": false, 00:22:04.100 "zoned": false, 00:22:04.100 "supported_io_types": { 00:22:04.100 "read": true, 00:22:04.100 "write": true, 00:22:04.100 "unmap": true, 00:22:04.100 "flush": true, 00:22:04.100 "reset": false, 00:22:04.100 "nvme_admin": false, 00:22:04.100 "nvme_io": false, 00:22:04.100 "nvme_io_md": false, 00:22:04.100 "write_zeroes": true, 00:22:04.100 "zcopy": false, 00:22:04.100 "get_zone_info": false, 00:22:04.100 "zone_management": false, 00:22:04.100 "zone_append": false, 00:22:04.100 "compare": false, 00:22:04.100 "compare_and_write": false, 00:22:04.100 "abort": false, 00:22:04.100 "seek_hole": false, 00:22:04.100 "seek_data": false, 00:22:04.100 "copy": false, 00:22:04.100 "nvme_iov_md": false 00:22:04.100 }, 00:22:04.100 "driver_specific": { 00:22:04.100 "ftl": { 00:22:04.100 "base_bdev": "3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:22:04.100 "cache": "nvc0n1p0" 00:22:04.100 } 00:22:04.100 } 00:22:04.100 } 00:22:04.100 ] 00:22:04.100 13:19:55 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:04.100 13:19:55 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:04.100 13:19:55 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:04.360 13:19:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:04.360 13:19:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:04.620 13:19:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:04.620 { 00:22:04.620 "name": "ftl0", 00:22:04.620 "aliases": [ 00:22:04.620 "540c5ffa-c404-4d48-a834-4d9cb8eefb38" 00:22:04.620 ], 00:22:04.620 "product_name": "FTL disk", 00:22:04.620 "block_size": 4096, 00:22:04.620 "num_blocks": 23592960, 00:22:04.620 "uuid": "540c5ffa-c404-4d48-a834-4d9cb8eefb38", 00:22:04.620 "assigned_rate_limits": { 00:22:04.620 "rw_ios_per_sec": 0, 00:22:04.620 "rw_mbytes_per_sec": 0, 00:22:04.620 "r_mbytes_per_sec": 0, 00:22:04.620 "w_mbytes_per_sec": 0 00:22:04.620 }, 00:22:04.620 "claimed": false, 00:22:04.620 "zoned": false, 00:22:04.620 "supported_io_types": { 00:22:04.620 "read": true, 00:22:04.620 "write": true, 00:22:04.620 "unmap": true, 00:22:04.620 "flush": true, 00:22:04.620 "reset": false, 00:22:04.620 "nvme_admin": false, 00:22:04.620 "nvme_io": false, 00:22:04.620 "nvme_io_md": false, 00:22:04.620 "write_zeroes": true, 00:22:04.620 "zcopy": false, 00:22:04.620 "get_zone_info": false, 00:22:04.620 "zone_management": false, 00:22:04.620 "zone_append": false, 00:22:04.620 "compare": false, 00:22:04.620 "compare_and_write": false, 00:22:04.620 "abort": false, 00:22:04.620 "seek_hole": false, 00:22:04.620 "seek_data": false, 00:22:04.620 "copy": false, 00:22:04.620 "nvme_iov_md": false 00:22:04.620 }, 00:22:04.620 "driver_specific": { 00:22:04.620 "ftl": { 00:22:04.620 "base_bdev": 
"3f0d45c1-3115-45fa-b087-0f00e89ffc28", 00:22:04.620 "cache": "nvc0n1p0" 00:22:04.620 } 00:22:04.620 } 00:22:04.620 } 00:22:04.620 ]' 00:22:04.620 13:19:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:04.620 13:19:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:04.620 13:19:56 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:04.880 [2024-12-11 13:19:56.227829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.227903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:04.880 [2024-12-11 13:19:56.227942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:04.880 [2024-12-11 13:19:56.227962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.880 [2024-12-11 13:19:56.228004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:04.880 [2024-12-11 13:19:56.232592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.232631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:04.880 [2024-12-11 13:19:56.232653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:22:04.880 [2024-12-11 13:19:56.232665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.880 [2024-12-11 13:19:56.233238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.233274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:04.880 [2024-12-11 13:19:56.233291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:22:04.880 [2024-12-11 13:19:56.233302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.880 [2024-12-11 13:19:56.236178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.236209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:04.880 [2024-12-11 13:19:56.236224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:22:04.880 [2024-12-11 13:19:56.236235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.880 [2024-12-11 13:19:56.241968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.242159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:04.880 [2024-12-11 13:19:56.242189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.684 ms 00:22:04.880 [2024-12-11 13:19:56.242201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.880 [2024-12-11 13:19:56.281085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.880 [2024-12-11 13:19:56.281137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:04.881 [2024-12-11 13:19:56.281178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.836 ms 00:22:04.881 [2024-12-11 13:19:56.281189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.881 [2024-12-11 13:19:56.304068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.881 [2024-12-11 13:19:56.304132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:04.881 [2024-12-11 13:19:56.304172] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.809 ms 00:22:04.881 [2024-12-11 13:19:56.304189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.881 [2024-12-11 13:19:56.304450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.881 [2024-12-11 13:19:56.304464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:04.881 [2024-12-11 13:19:56.304479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:22:04.881 [2024-12-11 13:19:56.304491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.881 [2024-12-11 13:19:56.342896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.881 [2024-12-11 13:19:56.342948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:04.881 [2024-12-11 13:19:56.342969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.427 ms 00:22:04.881 [2024-12-11 13:19:56.342980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.881 [2024-12-11 13:19:56.380731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.881 [2024-12-11 13:19:56.380784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:04.881 [2024-12-11 13:19:56.380825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.671 ms 00:22:04.881 [2024-12-11 13:19:56.380835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.881 [2024-12-11 13:19:56.416830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.881 [2024-12-11 13:19:56.416871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:04.881 [2024-12-11 13:19:56.416889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.951 ms 00:22:04.881 [2024-12-11 13:19:56.416916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.142 [2024-12-11 13:19:56.452704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.142 [2024-12-11 13:19:56.452744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.142 [2024-12-11 13:19:56.452762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.694 ms 00:22:05.142 [2024-12-11 13:19:56.452788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.142 [2024-12-11 13:19:56.452907] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.142 [2024-12-11 13:19:56.452927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.452945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.452957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.452972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.452984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.453002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 [2024-12-11 13:19:56.453013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.142 
[... 93 identical ftl_dev_dump_bands entries elided: Band 8 through Band 100, each reading "0 / 261120 wr_cnt: 0 state: free" ...]
00:22:05.143 [2024-12-11 13:19:56.454304] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:05.143 [2024-12-11 13:19:56.454321] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38
00:22:05.143 [2024-12-11 13:19:56.454332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:05.143 [2024-12-11 13:19:56.454345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:05.143 [2024-12-11 13:19:56.454355] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:05.143 [2024-12-11 13:19:56.454374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:05.143 [2024-12-11 13:19:56.454383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:05.143 [2024-12-11 13:19:56.454397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] crit: 0 00:22:05.143 [2024-12-11 13:19:56.454408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.143 [2024-12-11 13:19:56.454420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.143 [2024-12-11 13:19:56.454429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.143 [2024-12-11 13:19:56.454442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.143 [2024-12-11 13:19:56.454453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.143 [2024-12-11 13:19:56.454469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:22:05.143 [2024-12-11 13:19:56.454480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.475981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.143 [2024-12-11 13:19:56.476027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.143 [2024-12-11 13:19:56.476049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.491 ms 00:22:05.143 [2024-12-11 13:19:56.476060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.476747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.143 [2024-12-11 13:19:56.476767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.143 [2024-12-11 13:19:56.476782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:22:05.143 [2024-12-11 13:19:56.476793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.550685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.143 [2024-12-11 13:19:56.550766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.143 [2024-12-11 13:19:56.550787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.143 [2024-12-11 13:19:56.550799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.550975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.143 [2024-12-11 13:19:56.550989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.143 [2024-12-11 13:19:56.551003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.143 [2024-12-11 13:19:56.551014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.551105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.143 [2024-12-11 13:19:56.551136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.143 [2024-12-11 13:19:56.551160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.143 [2024-12-11 13:19:56.551171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 13:19:56.551211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.143 [2024-12-11 13:19:56.551222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.143 [2024-12-11 13:19:56.551257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.143 [2024-12-11 13:19:56.551268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.143 [2024-12-11 
13:19:56.694772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.143 [2024-12-11 13:19:56.694860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.143 [2024-12-11 13:19:56.694882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.143 [2024-12-11 13:19:56.694894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.403 [2024-12-11 13:19:56.802889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.403 [2024-12-11 13:19:56.802967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.403 [2024-12-11 13:19:56.803007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.403 [2024-12-11 13:19:56.803019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.403 [2024-12-11 13:19:56.803218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.403 [2024-12-11 13:19:56.803234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.403 [2024-12-11 13:19:56.803256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.403 [2024-12-11 13:19:56.803272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.403 [2024-12-11 13:19:56.803339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.403 [2024-12-11 13:19:56.803350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.403 [2024-12-11 13:19:56.803365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.404 [2024-12-11 13:19:56.803375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.404 [2024-12-11 13:19:56.803521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.404 [2024-12-11 13:19:56.803536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.404 [2024-12-11 13:19:56.803550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.404 [2024-12-11 13:19:56.803564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.404 [2024-12-11 13:19:56.803627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.404 [2024-12-11 13:19:56.803640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:05.404 [2024-12-11 13:19:56.803654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.404 [2024-12-11 13:19:56.803664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.404 [2024-12-11 13:19:56.803731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.404 [2024-12-11 13:19:56.803742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.404 [2024-12-11 13:19:56.803761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.404 [2024-12-11 13:19:56.803771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.404 [2024-12-11 13:19:56.803842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.404 [2024-12-11 13:19:56.803854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.404 [2024-12-11 13:19:56.803868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.404 [2024-12-11 13:19:56.803878] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:05.404 [2024-12-11 13:19:56.804108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 577.176 ms, result 0
00:22:05.404 true
00:22:05.404 13:19:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79575
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79575 ']'
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79575
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79575
00:22:05.404 killing process with pid 79575
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79575'
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79575
00:22:05.404 13:19:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79575
00:22:11.989 13:20:02 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:22:11.989 65536+0 records in
00:22:11.989 65536+0 records out
00:22:11.989 268435456 bytes (268 MB, 256 MiB) copied, 1.02235 s, 263 MB/s
00:22:11.989 13:20:03 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:11.989 [2024-12-11 13:20:03.386948] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
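The two trim.sh commands above are the data-load step of the trim test: dd produces a 256 MiB random pattern (65536 blocks of 4 KiB), and spdk_dd replays that file through the FTL bdev ftl0 using the ftl.json configuration the test generated earlier. This also explains the shutdown statistics above: user writes: 0 and WAF: inf are consistent with WAF being computed as media writes over user writes (960 / 0 here); this copy is what first produces user writes. A minimal by-hand sketch of the same step, reusing the job's workspace paths; the dd output path is an assumption, since the xtrace line above does not show where the pattern is written:

    # generate a 256 MiB random pattern (65536 x 4 KiB blocks); of= target assumed
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # replay the pattern through the FTL bdev; --ob selects the bdev defined in the JSON config
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json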
00:22:11.989 [2024-12-11 13:20:03.387105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79786 ] 00:22:12.248 [2024-12-11 13:20:03.571483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.248 [2024-12-11 13:20:03.715821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.818 [2024-12-11 13:20:04.146215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.818 [2024-12-11 13:20:04.146302] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.818 [2024-12-11 13:20:04.314459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.314531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:12.818 [2024-12-11 13:20:04.314550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:12.818 [2024-12-11 13:20:04.314562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.318141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.318185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.818 [2024-12-11 13:20:04.318199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:22:12.818 [2024-12-11 13:20:04.318211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.318321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:12.818 [2024-12-11 13:20:04.319259] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:12.818 [2024-12-11 13:20:04.319294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.319307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.818 [2024-12-11 13:20:04.319319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:22:12.818 [2024-12-11 13:20:04.319329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.321869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:12.818 [2024-12-11 13:20:04.342016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.342055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:12.818 [2024-12-11 13:20:04.342072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.180 ms 00:22:12.818 [2024-12-11 13:20:04.342098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.342237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.342256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:12.818 [2024-12-11 13:20:04.342269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:12.818 [2024-12-11 13:20:04.342280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.354513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:12.818 [2024-12-11 13:20:04.354547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.818 [2024-12-11 13:20:04.354563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms 00:22:12.818 [2024-12-11 13:20:04.354590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.354728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.354744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.818 [2024-12-11 13:20:04.354756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:12.818 [2024-12-11 13:20:04.354767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.354803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.354815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:12.818 [2024-12-11 13:20:04.354826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:12.818 [2024-12-11 13:20:04.354848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.354875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:12.818 [2024-12-11 13:20:04.360699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.360733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.818 [2024-12-11 13:20:04.360746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.841 ms 00:22:12.818 [2024-12-11 13:20:04.360757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.360827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.818 [2024-12-11 13:20:04.360843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:12.818 [2024-12-11 13:20:04.360855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:12.818 [2024-12-11 13:20:04.360865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.818 [2024-12-11 13:20:04.360893] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:12.818 [2024-12-11 13:20:04.360937] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:12.818 [2024-12-11 13:20:04.360979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:12.818 [2024-12-11 13:20:04.360999] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:12.818 [2024-12-11 13:20:04.361091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:12.818 [2024-12-11 13:20:04.361104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:12.818 [2024-12-11 13:20:04.361134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:12.818 [2024-12-11 13:20:04.361169] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361182] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361195] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:12.819 [2024-12-11 13:20:04.361206] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:12.819 [2024-12-11 13:20:04.361217] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:12.819 [2024-12-11 13:20:04.361227] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:12.819 [2024-12-11 13:20:04.361240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.819 [2024-12-11 13:20:04.361251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:12.819 [2024-12-11 13:20:04.361262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:22:12.819 [2024-12-11 13:20:04.361272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.819 [2024-12-11 13:20:04.361353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.819 [2024-12-11 13:20:04.361369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:12.819 [2024-12-11 13:20:04.361380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:12.819 [2024-12-11 13:20:04.361390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.819 [2024-12-11 13:20:04.361482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:12.819 [2024-12-11 13:20:04.361495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:12.819 [2024-12-11 13:20:04.361506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:12.819 [2024-12-11 13:20:04.361538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:12.819 [2024-12-11 13:20:04.361574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.819 [2024-12-11 13:20:04.361597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:12.819 [2024-12-11 13:20:04.361618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:12.819 [2024-12-11 13:20:04.361628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.819 [2024-12-11 13:20:04.361637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:12.819 [2024-12-11 13:20:04.361647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:12.819 [2024-12-11 13:20:04.361657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:12.819 [2024-12-11 13:20:04.361676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361685] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:12.819 [2024-12-11 13:20:04.361705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:12.819 [2024-12-11 13:20:04.361733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:12.819 [2024-12-11 13:20:04.361761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:12.819 [2024-12-11 13:20:04.361788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:12.819 [2024-12-11 13:20:04.361815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.819 [2024-12-11 13:20:04.361833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:12.819 [2024-12-11 13:20:04.361842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:12.819 [2024-12-11 13:20:04.361850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.819 [2024-12-11 13:20:04.361860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:12.819 [2024-12-11 13:20:04.361869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:12.819 [2024-12-11 13:20:04.361878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:12.819 [2024-12-11 13:20:04.361897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:12.819 [2024-12-11 13:20:04.361906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361915] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:12.819 [2024-12-11 13:20:04.361925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:12.819 [2024-12-11 13:20:04.361939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.819 [2024-12-11 13:20:04.361949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.819 [2024-12-11 13:20:04.361960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:12.819 [2024-12-11 13:20:04.361969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:12.819 [2024-12-11 13:20:04.361979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:12.819 
[2024-12-11 13:20:04.361988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:12.819 [2024-12-11 13:20:04.361997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:12.819 [2024-12-11 13:20:04.362007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:12.819 [2024-12-11 13:20:04.362018] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:12.819 [2024-12-11 13:20:04.362031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:12.819 [2024-12-11 13:20:04.362053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:12.819 [2024-12-11 13:20:04.362063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:12.819 [2024-12-11 13:20:04.362073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:12.819 [2024-12-11 13:20:04.362084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:12.819 [2024-12-11 13:20:04.362094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:12.819 [2024-12-11 13:20:04.362105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:12.819 [2024-12-11 13:20:04.362125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:12.819 [2024-12-11 13:20:04.362136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:12.819 [2024-12-11 13:20:04.362147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:12.819 [2024-12-11 13:20:04.362199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:12.819 [2024-12-11 13:20:04.362211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:12.819 [2024-12-11 13:20:04.362235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:12.819 [2024-12-11 13:20:04.362247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:12.819 [2024-12-11 13:20:04.362258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:12.819 [2024-12-11 13:20:04.362270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.819 [2024-12-11 13:20:04.362286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:12.819 [2024-12-11 13:20:04.362296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:22:12.819 [2024-12-11 13:20:04.362306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.079 [2024-12-11 13:20:04.412549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.079 [2024-12-11 13:20:04.412607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.079 [2024-12-11 13:20:04.412626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.254 ms 00:22:13.079 [2024-12-11 13:20:04.412638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.079 [2024-12-11 13:20:04.412860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.079 [2024-12-11 13:20:04.412874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.079 [2024-12-11 13:20:04.412887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:13.079 [2024-12-11 13:20:04.412898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.079 [2024-12-11 13:20:04.476605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.079 [2024-12-11 13:20:04.476671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.079 [2024-12-11 13:20:04.476688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.778 ms 00:22:13.079 [2024-12-11 13:20:04.476715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.079 [2024-12-11 13:20:04.476859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.079 [2024-12-11 13:20:04.476873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.079 [2024-12-11 13:20:04.476886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.079 [2024-12-11 13:20:04.476897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.079 [2024-12-11 13:20:04.477684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.079 [2024-12-11 13:20:04.477706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.080 [2024-12-11 13:20:04.477718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:22:13.080 [2024-12-11 13:20:04.477734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.477878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.477892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.080 [2024-12-11 13:20:04.477904] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:13.080 [2024-12-11 13:20:04.477914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.502052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.502106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.080 [2024-12-11 13:20:04.502136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.148 ms 00:22:13.080 [2024-12-11 13:20:04.502148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.522652] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:13.080 [2024-12-11 13:20:04.522708] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.080 [2024-12-11 13:20:04.522726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.522738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.080 [2024-12-11 13:20:04.522752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.416 ms 00:22:13.080 [2024-12-11 13:20:04.522762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.553013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.553064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.080 [2024-12-11 13:20:04.553081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.179 ms 00:22:13.080 [2024-12-11 13:20:04.553108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.571936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.571981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.080 [2024-12-11 13:20:04.571997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.742 ms 00:22:13.080 [2024-12-11 13:20:04.572008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.590689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.590742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.080 [2024-12-11 13:20:04.590757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.617 ms 00:22:13.080 [2024-12-11 13:20:04.590783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.080 [2024-12-11 13:20:04.591740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.080 [2024-12-11 13:20:04.591857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.080 [2024-12-11 13:20:04.591938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:22:13.080 [2024-12-11 13:20:04.591974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.688986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.689325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.339 [2024-12-11 13:20:04.689356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.107 ms 00:22:13.339 [2024-12-11 13:20:04.689369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.701315] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:13.339 [2024-12-11 13:20:04.727688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.727770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.339 [2024-12-11 13:20:04.727790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.167 ms 00:22:13.339 [2024-12-11 13:20:04.727802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.728003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.728019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.339 [2024-12-11 13:20:04.728032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:13.339 [2024-12-11 13:20:04.728043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.728139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.728169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.339 [2024-12-11 13:20:04.728197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:13.339 [2024-12-11 13:20:04.728208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.728256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.728276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.339 [2024-12-11 13:20:04.728287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:13.339 [2024-12-11 13:20:04.728298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.728380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.339 [2024-12-11 13:20:04.728400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.728411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.339 [2024-12-11 13:20:04.728423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:13.339 [2024-12-11 13:20:04.728434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.766052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.766100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.339 [2024-12-11 13:20:04.766127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.653 ms 00:22:13.339 [2024-12-11 13:20:04.766139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.339 [2024-12-11 13:20:04.766264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.339 [2024-12-11 13:20:04.766278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.339 [2024-12-11 13:20:04.766289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:13.339 [2024-12-11 13:20:04.766300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
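Each FTL management step in the startup sequence above is traced as a fixed quadruple of entries: Action, name: <step>, duration: <ms>, status: <code> (the 427/428/430/431:trace_step lines from mngt/ftl_mngt.c). To skim step timings out of a saved console log, a small awk filter is enough; a sketch, assuming one log entry per line and a hypothetical file name console.log:

    # pair each step name with the duration logged on the following trace_step line
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name " :: " $0 }' console.log

For this run it would print lines such as "Load super block :: 20.180 ms" and "Initialize NV cache :: 63.778 ms".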
00:22:13.339 [2024-12-11 13:20:04.767687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.339 [2024-12-11 13:20:04.772343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 453.615 ms, result 0 00:22:13.339 [2024-12-11 13:20:04.773344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:13.339 [2024-12-11 13:20:04.792315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.276  [2024-12-11T13:20:07.222Z] Copying: 23/256 [MB] (23 MBps) [2024-12-11T13:20:08.160Z] Copying: 45/256 [MB] (22 MBps) [2024-12-11T13:20:09.097Z] Copying: 68/256 [MB] (22 MBps) [2024-12-11T13:20:10.034Z] Copying: 91/256 [MB] (22 MBps) [2024-12-11T13:20:10.971Z] Copying: 114/256 [MB] (22 MBps) [2024-12-11T13:20:11.907Z] Copying: 136/256 [MB] (22 MBps) [2024-12-11T13:20:12.844Z] Copying: 160/256 [MB] (23 MBps) [2024-12-11T13:20:14.223Z] Copying: 183/256 [MB] (23 MBps) [2024-12-11T13:20:14.791Z] Copying: 206/256 [MB] (23 MBps) [2024-12-11T13:20:16.169Z] Copying: 230/256 [MB] (23 MBps) [2024-12-11T13:20:16.169Z] Copying: 253/256 [MB] (23 MBps) [2024-12-11T13:20:16.169Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-11 13:20:15.882707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.601 [2024-12-11 13:20:15.898176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.601 [2024-12-11 13:20:15.898349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.601 [2024-12-11 13:20:15.898478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.601 [2024-12-11 13:20:15.898518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.601 [2024-12-11 13:20:15.898580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:24.601 [2024-12-11 13:20:15.903302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.601 [2024-12-11 13:20:15.903425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.601 [2024-12-11 13:20:15.903579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:22:24.601 [2024-12-11 13:20:15.903595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.601 [2024-12-11 13:20:15.905651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.601 [2024-12-11 13:20:15.905705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.601 [2024-12-11 13:20:15.905719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:22:24.601 [2024-12-11 13:20:15.905730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.601 [2024-12-11 13:20:15.912740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.601 [2024-12-11 13:20:15.912783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.601 [2024-12-11 13:20:15.912795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:22:24.601 [2024-12-11 13:20:15.912806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.601 [2024-12-11 13:20:15.918332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.601 
[2024-12-11 13:20:15.918364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:24.601 [2024-12-11 13:20:15.918376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.481 ms
00:22:24.601 [2024-12-11 13:20:15.918386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:15.954803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:15.954841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:24.601 [2024-12-11 13:20:15.954856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.422 ms
00:22:24.601 [2024-12-11 13:20:15.954866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:15.975997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:15.976041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:24.601 [2024-12-11 13:20:15.976060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.111 ms
00:22:24.601 [2024-12-11 13:20:15.976086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:15.976255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:15.976269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:24.601 [2024-12-11 13:20:15.976281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms
00:22:24.601 [2024-12-11 13:20:15.976321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:16.011502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:16.011553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:24.601 [2024-12-11 13:20:16.011566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.218 ms
00:22:24.601 [2024-12-11 13:20:16.011577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:16.046878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:16.046928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:24.601 [2024-12-11 13:20:16.046941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.300 ms
00:22:24.601 [2024-12-11 13:20:16.046967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:16.081967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:16.082010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:24.601 [2024-12-11 13:20:16.082024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.001 ms
00:22:24.601 [2024-12-11 13:20:16.082036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.601 [2024-12-11 13:20:16.117368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.601 [2024-12-11 13:20:16.117426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:24.601 [2024-12-11 13:20:16.117441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.286 ms
00:22:24.601 [2024-12-11 13:20:16.117451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
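The sequence just above (persist metadata, persist superblock, then Set FTL clean state) is the graceful-detach path: marking the superblock clean lets the next attach of this device skip dirty-shutdown recovery, which would otherwise replay the P2L checkpoints (compare the "Restore P2L checkpoints" step during startup). Outside the test harness, the same FTL bdev lifecycle can be driven through SPDK's RPC script; a sketch with placeholder bdev names, assuming a running SPDK application (nvc0n1p0 matches the cache bdev named in this log, nvme0n1 is hypothetical):

    # create an FTL bdev 'ftl0' over a base bdev and an NV-cache bdev
    scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

    # detach it gracefully; FTL persists its metadata and marks the superblock clean
    scripts/rpc.py bdev_ftl_delete -b ftl0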
00:22:24.601 [2024-12-11 13:20:16.117545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[... 98 identical ftl_dev_dump_bands entries elided: Band 1 through Band 98, each reading "0 / 261120 wr_cnt: 0 state: free" ...]
00:22:24.602 [2024-12-11 13:20:16.118751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:22:24.602 [2024-12-11 13:20:16.118762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:24.602 [2024-12-11 13:20:16.118782] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:24.602 [2024-12-11 13:20:16.118793] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:22:24.602 [2024-12-11 13:20:16.118805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:24.602 [2024-12-11 13:20:16.118816] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:24.602 [2024-12-11 13:20:16.118826] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:24.602 [2024-12-11 13:20:16.118836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:24.602 [2024-12-11 13:20:16.118846] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:24.602 [2024-12-11 13:20:16.118857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:24.602 [2024-12-11 13:20:16.118868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:24.602 [2024-12-11 13:20:16.118877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:24.602 [2024-12-11 13:20:16.118887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:24.602 [2024-12-11 13:20:16.118898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.602 [2024-12-11 13:20:16.118914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:24.603 [2024-12-11 13:20:16.118925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:22:24.603 [2024-12-11 13:20:16.118936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-11 13:20:16.139164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-11 13:20:16.139199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:24.603 [2024-12-11 13:20:16.139212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.236 ms 00:22:24.603 [2024-12-11 13:20:16.139239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-11 13:20:16.139741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-11 13:20:16.139754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:24.603 [2024-12-11 13:20:16.139765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:22:24.603 [2024-12-11 13:20:16.139775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.862 [2024-12-11 13:20:16.197894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.862 [2024-12-11 13:20:16.198098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.862 [2024-12-11 13:20:16.198135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.862 [2024-12-11 13:20:16.198149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.862 [2024-12-11 13:20:16.198256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.862 [2024-12-11 13:20:16.198268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.862 [2024-12-11 13:20:16.198280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
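A note on the stats dump above: WAF is the write-amplification factor, which the adjacent counters suggest is total writes divided by user writes; since this run issued no user I/O (960 internal writes, 0 user writes), it is reported as "inf". A minimal shell check using the values from this dump (the zero-guard is ours, not SPDK's):

    # WAF = total writes / user writes; here 960 / 0 is printed as "inf".
    awk 'BEGIN { tw = 960; uw = 0; print (uw ? tw / uw : "inf") }'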
[FTL][ftl0] duration: 0.000 ms 00:22:24.862 [2024-12-11 13:20:16.198291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.862 [2024-12-11 13:20:16.198353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.862 [2024-12-11 13:20:16.198367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.862 [2024-12-11 13:20:16.198378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.862 [2024-12-11 13:20:16.198389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.862 [2024-12-11 13:20:16.198410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.862 [2024-12-11 13:20:16.198426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.862 [2024-12-11 13:20:16.198438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.862 [2024-12-11 13:20:16.198449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.862 [2024-12-11 13:20:16.331685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.862 [2024-12-11 13:20:16.331750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.862 [2024-12-11 13:20:16.331768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.862 [2024-12-11 13:20:16.331779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.121 [2024-12-11 13:20:16.440544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.121 [2024-12-11 13:20:16.440613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.121 [2024-12-11 13:20:16.440631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.121 [2024-12-11 13:20:16.440642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.121 [2024-12-11 13:20:16.440771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.121 [2024-12-11 13:20:16.440784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.121 [2024-12-11 13:20:16.440796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.121 [2024-12-11 13:20:16.440807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.121 [2024-12-11 13:20:16.440839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.121 [2024-12-11 13:20:16.440850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.121 [2024-12-11 13:20:16.440866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.121 [2024-12-11 13:20:16.440877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.121 [2024-12-11 13:20:16.441005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.121 [2024-12-11 13:20:16.441019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.121 [2024-12-11 13:20:16.441031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.121 [2024-12-11 13:20:16.441041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.122 [2024-12-11 13:20:16.441080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.122 [2024-12-11 13:20:16.441093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.122 
[2024-12-11 13:20:16.441104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.122 [2024-12-11 13:20:16.441140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.122 [2024-12-11 13:20:16.441205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.122 [2024-12-11 13:20:16.441217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.122 [2024-12-11 13:20:16.441228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.122 [2024-12-11 13:20:16.441255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.122 [2024-12-11 13:20:16.441305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.122 [2024-12-11 13:20:16.441318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.122 [2024-12-11 13:20:16.441334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.122 [2024-12-11 13:20:16.441345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.122 [2024-12-11 13:20:16.441516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.222 ms, result 0 00:22:26.500 00:22:26.500 00:22:26.500 13:20:17 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79935 00:22:26.500 13:20:17 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:26.500 13:20:17 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79935 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79935 ']' 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.500 13:20:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:26.500 [2024-12-11 13:20:17.871885] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
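The xtrace above shows ftl/trim.sh starting a fresh spdk_tgt (pid recorded in svcpid) and blocking until the RPC socket comes up. A minimal sketch of that launch-and-wait pattern, assuming the repo paths from this log; the polling loop stands in for autotest_common.sh's waitforlisten and uses only stock rpc.py options:

    # Start the target, then poll the default RPC socket (/var/tmp/spdk.sock)
    # until it answers; rpc.py exits non-zero while the socket is not up yet.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt (pid $svcpid) is listening"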
00:22:26.500 [2024-12-11 13:20:17.872735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79935 ] 00:22:26.500 [2024-12-11 13:20:18.059594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.759 [2024-12-11 13:20:18.195872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.751 13:20:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.751 13:20:19 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:27.751 13:20:19 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:28.044 [2024-12-11 13:20:19.406635] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.044 [2024-12-11 13:20:19.406726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.044 [2024-12-11 13:20:19.591278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.044 [2024-12-11 13:20:19.591344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.044 [2024-12-11 13:20:19.591367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:28.044 [2024-12-11 13:20:19.591379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.044 [2024-12-11 13:20:19.595463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.044 [2024-12-11 13:20:19.595515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.044 [2024-12-11 13:20:19.595532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:22:28.044 [2024-12-11 13:20:19.595543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.044 [2024-12-11 13:20:19.595665] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.044 [2024-12-11 13:20:19.596730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.044 [2024-12-11 13:20:19.596768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.044 [2024-12-11 13:20:19.596781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.044 [2024-12-11 13:20:19.596794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:22:28.044 [2024-12-11 13:20:19.596805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.044 [2024-12-11 13:20:19.599359] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:28.304 [2024-12-11 13:20:19.619940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.304 [2024-12-11 13:20:19.619984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:28.305 [2024-12-11 13:20:19.620015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.617 ms 00:22:28.305 [2024-12-11 13:20:19.620035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.620157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.620178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:28.305 [2024-12-11 13:20:19.620190] 
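The load_config call above (ftl/trim.sh@75) replays a JSON configuration, typically one captured earlier with save_config, into the freshly started target; the two "Currently unable to find bdev with name: nvc0n1" notices appear to be bdev_open_ext firing before the cache bdev has been re-created by the replayed config. A sketch of the save/replay round trip, with ftl.json as an illustrative file name only:

    # Dump the running target's configuration, then feed it to a fresh target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > ftl.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json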
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:28.305 [2024-12-11 13:20:19.620206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.632549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.632597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.305 [2024-12-11 13:20:19.632628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:22:28.305 [2024-12-11 13:20:19.632644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.632819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.632840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.305 [2024-12-11 13:20:19.632852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:22:28.305 [2024-12-11 13:20:19.632876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.632910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.632929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.305 [2024-12-11 13:20:19.632940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:28.305 [2024-12-11 13:20:19.632956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.632987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:28.305 [2024-12-11 13:20:19.638655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.638831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.305 [2024-12-11 13:20:19.638862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.679 ms 00:22:28.305 [2024-12-11 13:20:19.638874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.638951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.638964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.305 [2024-12-11 13:20:19.638981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:28.305 [2024-12-11 13:20:19.638998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.639029] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:28.305 [2024-12-11 13:20:19.639062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:28.305 [2024-12-11 13:20:19.639137] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:28.305 [2024-12-11 13:20:19.639161] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:28.305 [2024-12-11 13:20:19.639263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:28.305 [2024-12-11 13:20:19.639277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.305 [2024-12-11 13:20:19.639304] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:28.305 [2024-12-11 13:20:19.639318] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:28.305 [2024-12-11 13:20:19.639364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.305 [2024-12-11 13:20:19.639375] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:28.305 [2024-12-11 13:20:19.639397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:28.305 [2024-12-11 13:20:19.639408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.639424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.305 [2024-12-11 13:20:19.639436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:22:28.305 [2024-12-11 13:20:19.639452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.639537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.305 [2024-12-11 13:20:19.639553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.305 [2024-12-11 13:20:19.639564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:28.305 [2024-12-11 13:20:19.639580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.305 [2024-12-11 13:20:19.639675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.305 [2024-12-11 13:20:19.639694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.305 [2024-12-11 13:20:19.639705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.639732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.305 [2024-12-11 13:20:19.639750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.639760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.305 [2024-12-11 13:20:19.639791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:28.305 [2024-12-11 13:20:19.639807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.305 [2024-12-11 13:20:19.639817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.305 [2024-12-11 13:20:19.639832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:28.305 [2024-12-11 13:20:19.639842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.305 [2024-12-11 13:20:19.639857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.305 [2024-12-11 13:20:19.639868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:28.305 [2024-12-11 13:20:19.639883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 
[2024-12-11 13:20:19.639893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.305 [2024-12-11 13:20:19.639907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.639944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.305 [2024-12-11 13:20:19.639954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:28.305 [2024-12-11 13:20:19.639968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.305 [2024-12-11 13:20:19.639978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.305 [2024-12-11 13:20:19.639998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.305 [2024-12-11 13:20:19.640023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.305 [2024-12-11 13:20:19.640033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.305 [2024-12-11 13:20:19.640059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:28.305 [2024-12-11 13:20:19.640075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.305 [2024-12-11 13:20:19.640101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.305 [2024-12-11 13:20:19.640127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.305 [2024-12-11 13:20:19.640153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.305 [2024-12-11 13:20:19.640168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:28.305 [2024-12-11 13:20:19.640178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.305 [2024-12-11 13:20:19.640194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:28.305 [2024-12-11 13:20:19.640204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:28.305 [2024-12-11 13:20:19.640224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:28.305 [2024-12-11 13:20:19.640259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:28.305 [2024-12-11 13:20:19.640269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640301] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.305 [2024-12-11 13:20:19.640318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.305 [2024-12-11 13:20:19.640334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.305 [2024-12-11 13:20:19.640345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.305 [2024-12-11 13:20:19.640361] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:28.305 [2024-12-11 13:20:19.640372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.305 [2024-12-11 13:20:19.640387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.305 [2024-12-11 13:20:19.640397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.305 [2024-12-11 13:20:19.640420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.305 [2024-12-11 13:20:19.640429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.305 [2024-12-11 13:20:19.640447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.305 [2024-12-11 13:20:19.640460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.305 [2024-12-11 13:20:19.640485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:28.306 [2024-12-11 13:20:19.640496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:28.306 [2024-12-11 13:20:19.640510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:28.306 [2024-12-11 13:20:19.640521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:28.306 [2024-12-11 13:20:19.640535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:28.306 [2024-12-11 13:20:19.640546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:28.306 [2024-12-11 13:20:19.640560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:28.306 [2024-12-11 13:20:19.640571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:28.306 [2024-12-11 13:20:19.640586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:28.306 [2024-12-11 13:20:19.640596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:28.306 [2024-12-11 13:20:19.640657] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.306 [2024-12-11 
13:20:19.640669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.306 [2024-12-11 13:20:19.640698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.306 [2024-12-11 13:20:19.640711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.306 [2024-12-11 13:20:19.640722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:28.306 [2024-12-11 13:20:19.640736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.640750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.306 [2024-12-11 13:20:19.640764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:22:28.306 [2024-12-11 13:20:19.640776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.690831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.690893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.306 [2024-12-11 13:20:19.690931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.060 ms 00:22:28.306 [2024-12-11 13:20:19.690949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.691160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.691176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.306 [2024-12-11 13:20:19.691194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:28.306 [2024-12-11 13:20:19.691205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.745710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.745762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.306 [2024-12-11 13:20:19.745783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.554 ms 00:22:28.306 [2024-12-11 13:20:19.745795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.745907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.745920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.306 [2024-12-11 13:20:19.745937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:28.306 [2024-12-11 13:20:19.745949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.746702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.746721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.306 [2024-12-11 13:20:19.746745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:22:28.306 [2024-12-11 13:20:19.746755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
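The layout dump above is internally consistent, and the arithmetic can be verified from the log's own numbers (FTL block size 4 KiB): the l2p region of 0x5a00 blocks is exactly the 90.00 MiB reported, which in turn matches 23592960 L2P entries at 4 bytes each (the "L2P address size: 4" line). A quick shell cross-check:

    # Cross-check the region sizes reported in the layout dump above.
    echo $(( 0x5a00 ))                          # l2p region: 23040 blocks
    echo $(( 23040 * 4096 / 1024 / 1024 ))      # 23040 x 4 KiB = 90 MiB ("Region l2p ... blocks: 90.00 MiB")
    echo $(( 23592960 * 4 / 1024 / 1024 ))      # 23592960 L2P entries x 4 B = 90 MiB as well
    echo $(( 0x1900000 * 4096 / 1024 / 1024 ))  # base-dev data region 0x1900000 blocks = 102400 MiB ("data_btm")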
[FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.746901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.746920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.306 [2024-12-11 13:20:19.746937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:28.306 [2024-12-11 13:20:19.746947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.774025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.774240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.306 [2024-12-11 13:20:19.774276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.085 ms 00:22:28.306 [2024-12-11 13:20:19.774290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.805744] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:28.306 [2024-12-11 13:20:19.805785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:28.306 [2024-12-11 13:20:19.805807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.805819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:28.306 [2024-12-11 13:20:19.805837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.399 ms 00:22:28.306 [2024-12-11 13:20:19.805863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.834933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.834970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:28.306 [2024-12-11 13:20:19.834991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.001 ms 00:22:28.306 [2024-12-11 13:20:19.835018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-12-11 13:20:19.852823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-12-11 13:20:19.852991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:28.306 [2024-12-11 13:20:19.853027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:22:28.306 [2024-12-11 13:20:19.853039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:19.870882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:19.871015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:28.566 [2024-12-11 13:20:19.871044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:22:28.566 [2024-12-11 13:20:19.871056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:19.871916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:19.871958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.566 [2024-12-11 13:20:19.871974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:22:28.566 [2024-12-11 13:20:19.871984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 
13:20:19.966925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:19.967031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:28.566 [2024-12-11 13:20:19.967075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.042 ms 00:22:28.566 [2024-12-11 13:20:19.967087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:19.978158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:28.566 [2024-12-11 13:20:20.003221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.003314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.566 [2024-12-11 13:20:20.003334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.011 ms 00:22:28.566 [2024-12-11 13:20:20.003351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.003502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.003523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:28.566 [2024-12-11 13:20:20.003538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:28.566 [2024-12-11 13:20:20.003555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.003627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.003646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.566 [2024-12-11 13:20:20.003670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:28.566 [2024-12-11 13:20:20.003686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.003716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.003737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.566 [2024-12-11 13:20:20.003748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:28.566 [2024-12-11 13:20:20.003765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.003814] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:28.566 [2024-12-11 13:20:20.003846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.003858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:28.566 [2024-12-11 13:20:20.003876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:28.566 [2024-12-11 13:20:20.003893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.042059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.042140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.566 [2024-12-11 13:20:20.042167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.183 ms 00:22:28.566 [2024-12-11 13:20:20.042180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.042338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-12-11 13:20:20.042353] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.566 [2024-12-11 13:20:20.042380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:28.566 [2024-12-11 13:20:20.042391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-12-11 13:20:20.043953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.566 [2024-12-11 13:20:20.049575] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.863 ms, result 0 00:22:28.566 [2024-12-11 13:20:20.050818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:28.566 Some configs were skipped because the RPC state that can call them passed over. 00:22:28.566 13:20:20 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:28.826 [2024-12-11 13:20:20.311200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.826 [2024-12-11 13:20:20.311424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:28.826 [2024-12-11 13:20:20.311536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:22:28.826 [2024-12-11 13:20:20.311586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.826 [2024-12-11 13:20:20.311668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.310 ms, result 0 00:22:28.826 true 00:22:28.826 13:20:20 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:29.085 [2024-12-11 13:20:20.526829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.085 [2024-12-11 13:20:20.526893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:29.085 [2024-12-11 13:20:20.526918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:22:29.085 [2024-12-11 13:20:20.526929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.085 [2024-12-11 13:20:20.526984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.630 ms, result 0 00:22:29.085 true 00:22:29.085 13:20:20 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79935 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79935 ']' 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79935 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79935 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.085 killing process with pid 79935 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79935' 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79935 00:22:29.085 13:20:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79935 00:22:30.465 [2024-12-11 13:20:21.806723] 
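For reference, the two trim RPCs exercised above, exactly as ftl/trim.sh issued them; the second --lba is 23592960 - 1024 = 23591936, so the test unmaps the first and the last 1024 blocks of the 23592960-entry L2P range reported during startup:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024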
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.806803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:30.465 [2024-12-11 13:20:21.806831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:30.465 [2024-12-11 13:20:21.806862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.806888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:30.465 [2024-12-11 13:20:21.811567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.811603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:30.465 [2024-12-11 13:20:21.811622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:22:30.465 [2024-12-11 13:20:21.811632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.811929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.811942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:30.465 [2024-12-11 13:20:21.811955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:22:30.465 [2024-12-11 13:20:21.811966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.815386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.815426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:30.465 [2024-12-11 13:20:21.815443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:22:30.465 [2024-12-11 13:20:21.815453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.820914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.821070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:30.465 [2024-12-11 13:20:21.821115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:22:30.465 [2024-12-11 13:20:21.821126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.836483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.836536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:30.465 [2024-12-11 13:20:21.836554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.271 ms 00:22:30.465 [2024-12-11 13:20:21.836580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.847196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.847229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:30.465 [2024-12-11 13:20:21.847245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.556 ms 00:22:30.465 [2024-12-11 13:20:21.847256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.465 [2024-12-11 13:20:21.847413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.465 [2024-12-11 13:20:21.847427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:30.465 [2024-12-11 13:20:21.847441] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:22:30.465 [2024-12-11 13:20:21.847451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:30.465 [2024-12-11 13:20:21.862990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:30.465 [2024-12-11 13:20:21.863128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:30.465 [2024-12-11 13:20:21.863176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.540 ms
00:22:30.465 [2024-12-11 13:20:21.863187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:30.465 [2024-12-11 13:20:21.878035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:30.466 [2024-12-11 13:20:21.878187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:30.466 [2024-12-11 13:20:21.878223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.811 ms
00:22:30.466 [2024-12-11 13:20:21.878234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:30.466 [2024-12-11 13:20:21.892454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:30.466 [2024-12-11 13:20:21.892606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:30.466 [2024-12-11 13:20:21.892636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.182 ms
00:22:30.466 [2024-12-11 13:20:21.892646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:30.466 [2024-12-11 13:20:21.906895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:30.466 [2024-12-11 13:20:21.907025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:30.466 [2024-12-11 13:20:21.907053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.178 ms
00:22:30.466 [2024-12-11 13:20:21.907064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:30.466 [2024-12-11 13:20:21.907128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[Band 1 through Band 84, each: 0 / 261120 wr_cnt: 0 state: free]
00:22:30.467 [2024-12-11 13:20:21.908334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:30.467 [2024-12-11 13:20:21.908559] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:30.467 [2024-12-11 13:20:21.908585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:22:30.467 [2024-12-11 13:20:21.908597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:30.467 [2024-12-11 13:20:21.908612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:30.467 [2024-12-11 13:20:21.908623] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:30.467 [2024-12-11 13:20:21.908639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:30.467 [2024-12-11 13:20:21.908649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:30.467 [2024-12-11 13:20:21.908666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:30.467 [2024-12-11 13:20:21.908677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:30.467 [2024-12-11 13:20:21.908692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:30.467 [2024-12-11 13:20:21.908702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:30.467 [2024-12-11 13:20:21.908717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:30.467 [2024-12-11 13:20:21.908729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:30.467 [2024-12-11 13:20:21.908745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:22:30.467 [2024-12-11 13:20:21.908761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:21.929503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.467 [2024-12-11 13:20:21.929658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:30.467 [2024-12-11 13:20:21.929709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.742 ms 00:22:30.467 [2024-12-11 13:20:21.929721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:21.930439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.467 [2024-12-11 13:20:21.930463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:30.467 [2024-12-11 13:20:21.930480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:22:30.467 [2024-12-11 13:20:21.930490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:22.002746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.467 [2024-12-11 13:20:22.002991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.467 [2024-12-11 13:20:22.003027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.467 [2024-12-11 13:20:22.003040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:22.003224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.467 [2024-12-11 13:20:22.003247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.467 [2024-12-11 13:20:22.003264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.467 [2024-12-11 13:20:22.003276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:22.003346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.467 [2024-12-11 13:20:22.003361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.467 [2024-12-11 13:20:22.003385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.467 [2024-12-11 13:20:22.003396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.467 [2024-12-11 13:20:22.003423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.467 [2024-12-11 13:20:22.003434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.467 [2024-12-11 13:20:22.003450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.467 [2024-12-11 13:20:22.003467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.133795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.133880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.727 [2024-12-11 13:20:22.133904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.133915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 
13:20:22.237643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.237723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.727 [2024-12-11 13:20:22.237769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.237780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.237932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.237945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:30.727 [2024-12-11 13:20:22.237969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.237980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.238034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:30.727 [2024-12-11 13:20:22.238050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.238061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.238223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:30.727 [2024-12-11 13:20:22.238238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.238248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.238311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:30.727 [2024-12-11 13:20:22.238326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.238336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.238405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:30.727 [2024-12-11 13:20:22.238423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.238434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.727 [2024-12-11 13:20:22.238501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:30.727 [2024-12-11 13:20:22.238514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.727 [2024-12-11 13:20:22.238525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.727 [2024-12-11 13:20:22.238698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.640 ms, result 0 00:22:32.107 13:20:23 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:32.107 13:20:23 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:32.107 [2024-12-11 13:20:23.452841] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:22:32.107 [2024-12-11 13:20:23.452999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80005 ] 00:22:32.107 [2024-12-11 13:20:23.638593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.367 [2024-12-11 13:20:23.783380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.936 [2024-12-11 13:20:24.197895] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:32.936 [2024-12-11 13:20:24.197983] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:32.936 [2024-12-11 13:20:24.364463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.936 [2024-12-11 13:20:24.364533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:32.936 [2024-12-11 13:20:24.364550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:32.936 [2024-12-11 13:20:24.364577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.936 [2024-12-11 13:20:24.368032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.936 [2024-12-11 13:20:24.368074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.936 [2024-12-11 13:20:24.368087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:22:32.936 [2024-12-11 13:20:24.368113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.936 [2024-12-11 13:20:24.368244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:32.937 [2024-12-11 13:20:24.369326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:32.937 [2024-12-11 13:20:24.369361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.369375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.937 [2024-12-11 13:20:24.369386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:22:32.937 [2024-12-11 13:20:24.369396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.372081] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:32.937 [2024-12-11 13:20:24.391408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.391444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:32.937 [2024-12-11 13:20:24.391459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.359 ms 00:22:32.937 [2024-12-11 13:20:24.391486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.391591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.391606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:32.937 [2024-12-11 13:20:24.391619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:22:32.937 [2024-12-11 13:20:24.391629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.403318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.403346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.937 [2024-12-11 13:20:24.403359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.661 ms 00:22:32.937 [2024-12-11 13:20:24.403384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.403510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.403526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.937 [2024-12-11 13:20:24.403537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:32.937 [2024-12-11 13:20:24.403549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.403585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.403596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:32.937 [2024-12-11 13:20:24.403608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:32.937 [2024-12-11 13:20:24.403618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.403645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:32.937 [2024-12-11 13:20:24.409374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.409405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.937 [2024-12-11 13:20:24.409419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.746 ms 00:22:32.937 [2024-12-11 13:20:24.409446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.409504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.409517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:32.937 [2024-12-11 13:20:24.409529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:32.937 [2024-12-11 13:20:24.409539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.409574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:32.937 [2024-12-11 13:20:24.409600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:32.937 [2024-12-11 13:20:24.409639] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:32.937 [2024-12-11 13:20:24.409659] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:32.937 [2024-12-11 13:20:24.409754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:32.937 [2024-12-11 13:20:24.409768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:32.937 [2024-12-11 13:20:24.409783] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:32.937 [2024-12-11 13:20:24.409801] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:32.937 [2024-12-11 13:20:24.409815] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:32.937 [2024-12-11 13:20:24.409827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:32.937 [2024-12-11 13:20:24.409838] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:32.937 [2024-12-11 13:20:24.409848] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:32.937 [2024-12-11 13:20:24.409859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:32.937 [2024-12-11 13:20:24.409871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.409881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:32.937 [2024-12-11 13:20:24.409893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:32.937 [2024-12-11 13:20:24.409903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.409983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.937 [2024-12-11 13:20:24.409997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:32.937 [2024-12-11 13:20:24.410008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:32.937 [2024-12-11 13:20:24.410018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.937 [2024-12-11 13:20:24.410127] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:32.937 [2024-12-11 13:20:24.410143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:32.937 [2024-12-11 13:20:24.410155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:32.937 [2024-12-11 13:20:24.410187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:32.937 [2024-12-11 13:20:24.410218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.937 [2024-12-11 13:20:24.410258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:32.937 [2024-12-11 13:20:24.410281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:32.937 [2024-12-11 13:20:24.410291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.937 [2024-12-11 13:20:24.410301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:32.937 [2024-12-11 13:20:24.410311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:32.937 [2024-12-11 13:20:24.410321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410330] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:32.937 [2024-12-11 13:20:24.410340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:32.937 [2024-12-11 13:20:24.410369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:32.937 [2024-12-11 13:20:24.410397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:32.937 [2024-12-11 13:20:24.410430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:32.937 [2024-12-11 13:20:24.410457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:32.937 [2024-12-11 13:20:24.410483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.937 [2024-12-11 13:20:24.410502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:32.937 [2024-12-11 13:20:24.410511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:32.937 [2024-12-11 13:20:24.410520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.937 [2024-12-11 13:20:24.410529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:32.937 [2024-12-11 13:20:24.410538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:32.937 [2024-12-11 13:20:24.410548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:32.937 [2024-12-11 13:20:24.410566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:32.937 [2024-12-11 13:20:24.410577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:32.937 [2024-12-11 13:20:24.410597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:32.937 [2024-12-11 13:20:24.410612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.937 [2024-12-11 13:20:24.410623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.937 [2024-12-11 13:20:24.410634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:32.937 
[2024-12-11 13:20:24.410644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:32.937 [2024-12-11 13:20:24.410654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:32.937 [2024-12-11 13:20:24.410663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:32.938 [2024-12-11 13:20:24.410672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:32.938 [2024-12-11 13:20:24.410682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:32.938 [2024-12-11 13:20:24.410693] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:32.938 [2024-12-11 13:20:24.410706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:32.938 [2024-12-11 13:20:24.410729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:32.938 [2024-12-11 13:20:24.410740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:32.938 [2024-12-11 13:20:24.410751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:32.938 [2024-12-11 13:20:24.410762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:32.938 [2024-12-11 13:20:24.410772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:32.938 [2024-12-11 13:20:24.410783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:32.938 [2024-12-11 13:20:24.410793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:32.938 [2024-12-11 13:20:24.410804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:32.938 [2024-12-11 13:20:24.410814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:32.938 [2024-12-11 13:20:24.410870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:32.938 [2024-12-11 13:20:24.410882] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.938 [2024-12-11 13:20:24.410903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:32.938 [2024-12-11 13:20:24.410914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:32.938 [2024-12-11 13:20:24.410925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:32.938 [2024-12-11 13:20:24.410936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.938 [2024-12-11 13:20:24.410952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:32.938 [2024-12-11 13:20:24.410963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:22:32.938 [2024-12-11 13:20:24.410974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.938 [2024-12-11 13:20:24.459387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.938 [2024-12-11 13:20:24.459445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.938 [2024-12-11 13:20:24.459461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.424 ms 00:22:32.938 [2024-12-11 13:20:24.459489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.938 [2024-12-11 13:20:24.459682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.938 [2024-12-11 13:20:24.459696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:32.938 [2024-12-11 13:20:24.459708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:32.938 [2024-12-11 13:20:24.459719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.523278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.523349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.198 [2024-12-11 13:20:24.523365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.634 ms 00:22:33.198 [2024-12-11 13:20:24.523392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.523507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.523521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.198 [2024-12-11 13:20:24.523533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:33.198 [2024-12-11 13:20:24.523544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.524269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.524283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.198 [2024-12-11 13:20:24.524295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:22:33.198 [2024-12-11 13:20:24.524326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 
13:20:24.524466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.524481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.198 [2024-12-11 13:20:24.524492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:33.198 [2024-12-11 13:20:24.524503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.547970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.548019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.198 [2024-12-11 13:20:24.548035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.478 ms 00:22:33.198 [2024-12-11 13:20:24.548061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.567779] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:33.198 [2024-12-11 13:20:24.567819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:33.198 [2024-12-11 13:20:24.567836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.567864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:33.198 [2024-12-11 13:20:24.567876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.637 ms 00:22:33.198 [2024-12-11 13:20:24.567886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.598368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.598410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:33.198 [2024-12-11 13:20:24.598425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.421 ms 00:22:33.198 [2024-12-11 13:20:24.598438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.617336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.617492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:33.198 [2024-12-11 13:20:24.617514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:22:33.198 [2024-12-11 13:20:24.617525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.635723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.635887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:33.198 [2024-12-11 13:20:24.635908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.095 ms 00:22:33.198 [2024-12-11 13:20:24.635918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.636688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.636712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:33.198 [2024-12-11 13:20:24.636726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:22:33.198 [2024-12-11 13:20:24.636737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.731109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:33.198 [2024-12-11 13:20:24.731199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:33.198 [2024-12-11 13:20:24.731235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.492 ms 00:22:33.198 [2024-12-11 13:20:24.731246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.198 [2024-12-11 13:20:24.742922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:33.458 [2024-12-11 13:20:24.768179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.768264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:33.458 [2024-12-11 13:20:24.768283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.828 ms 00:22:33.458 [2024-12-11 13:20:24.768305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.768460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.768476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:33.458 [2024-12-11 13:20:24.768488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:33.458 [2024-12-11 13:20:24.768499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.768573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.768586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:33.458 [2024-12-11 13:20:24.768597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:33.458 [2024-12-11 13:20:24.768615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.768660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.768676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:33.458 [2024-12-11 13:20:24.768687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:33.458 [2024-12-11 13:20:24.768698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.768746] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:33.458 [2024-12-11 13:20:24.768760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.768771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:33.458 [2024-12-11 13:20:24.768782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:33.458 [2024-12-11 13:20:24.768793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.805950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.805996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:33.458 [2024-12-11 13:20:24.806012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.192 ms 00:22:33.458 [2024-12-11 13:20:24.806023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.806164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.458 [2024-12-11 13:20:24.806180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:33.458 [2024-12-11 13:20:24.806193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:33.458 [2024-12-11 13:20:24.806204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.458 [2024-12-11 13:20:24.807495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:33.458 [2024-12-11 13:20:24.812141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 443.389 ms, result 0 00:22:33.458 [2024-12-11 13:20:24.812980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:33.458 [2024-12-11 13:20:24.832086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.395  [2024-12-11T13:20:26.900Z] Copying: 27/256 [MB] (27 MBps) [2024-12-11T13:20:27.837Z] Copying: 52/256 [MB] (25 MBps) [2024-12-11T13:20:29.215Z] Copying: 77/256 [MB] (24 MBps) [2024-12-11T13:20:30.153Z] Copying: 102/256 [MB] (24 MBps) [2024-12-11T13:20:31.089Z] Copying: 126/256 [MB] (23 MBps) [2024-12-11T13:20:32.026Z] Copying: 149/256 [MB] (23 MBps) [2024-12-11T13:20:32.963Z] Copying: 174/256 [MB] (24 MBps) [2024-12-11T13:20:33.901Z] Copying: 198/256 [MB] (24 MBps) [2024-12-11T13:20:34.873Z] Copying: 222/256 [MB] (24 MBps) [2024-12-11T13:20:35.443Z] Copying: 247/256 [MB] (24 MBps) [2024-12-11T13:20:35.443Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-11 13:20:35.184332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:43.875 [2024-12-11 13:20:35.200314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.200515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:43.875 [2024-12-11 13:20:35.200661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:43.875 [2024-12-11 13:20:35.200700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.200757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:43.875 [2024-12-11 13:20:35.205681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.205841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:43.875 [2024-12-11 13:20:35.205924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:22:43.875 [2024-12-11 13:20:35.205960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.206246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.206290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:43.875 [2024-12-11 13:20:35.206323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:22:43.875 [2024-12-11 13:20:35.206404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.209331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.209440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:43.875 [2024-12-11 13:20:35.209459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:22:43.875 [2024-12-11 13:20:35.209471] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.215107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.215146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:43.875 [2024-12-11 13:20:35.215159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.619 ms 00:22:43.875 [2024-12-11 13:20:35.215169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.252256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.252328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:43.875 [2024-12-11 13:20:35.252345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.062 ms 00:22:43.875 [2024-12-11 13:20:35.252355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.275030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.275082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.875 [2024-12-11 13:20:35.275123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.650 ms 00:22:43.875 [2024-12-11 13:20:35.275151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.875 [2024-12-11 13:20:35.275316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.875 [2024-12-11 13:20:35.275330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:43.875 [2024-12-11 13:20:35.275356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:43.875 [2024-12-11 13:20:35.275367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.876 [2024-12-11 13:20:35.312270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.876 [2024-12-11 13:20:35.312326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:43.876 [2024-12-11 13:20:35.312343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.942 ms 00:22:43.876 [2024-12-11 13:20:35.312370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.876 [2024-12-11 13:20:35.348144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.876 [2024-12-11 13:20:35.348347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:43.876 [2024-12-11 13:20:35.348371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.757 ms 00:22:43.876 [2024-12-11 13:20:35.348382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.876 [2024-12-11 13:20:35.383973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.876 [2024-12-11 13:20:35.384018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:43.876 [2024-12-11 13:20:35.384033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.544 ms 00:22:43.876 [2024-12-11 13:20:35.384060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.876 [2024-12-11 13:20:35.419603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.876 [2024-12-11 13:20:35.419647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:43.876 [2024-12-11 13:20:35.419661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.497 ms 00:22:43.876 [2024-12-11 13:20:35.419688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.876 [2024-12-11 13:20:35.419749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:43.876 [2024-12-11 13:20:35.419769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.419999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 
[2024-12-11 13:20:35.420033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:22:43.876 [2024-12-11 13:20:35.420336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:43.876 [2024-12-11 13:20:35.420661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:43.877 [2024-12-11 13:20:35.420936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:43.877 [2024-12-11 13:20:35.420946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:22:43.877 [2024-12-11 13:20:35.420962] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:43.877 [2024-12-11 13:20:35.420972] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:43.877 [2024-12-11 13:20:35.420983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:43.877 [2024-12-11 13:20:35.420995] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:43.877 [2024-12-11 13:20:35.421005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:43.877 [2024-12-11 13:20:35.421016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:43.877 [2024-12-11 13:20:35.421032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:43.877 [2024-12-11 13:20:35.421042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:43.877 [2024-12-11 13:20:35.421052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:43.877 [2024-12-11 13:20:35.421062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.877 [2024-12-11 13:20:35.421073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:43.877 [2024-12-11 13:20:35.421085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:22:43.877 [2024-12-11 13:20:35.421095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.442228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.137 [2024-12-11 13:20:35.442270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:44.137 [2024-12-11 13:20:35.442284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.132 ms 00:22:44.137 [2024-12-11 13:20:35.442296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.442949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.137 [2024-12-11 13:20:35.442966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.137 [2024-12-11 13:20:35.442978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:22:44.137 [2024-12-11 13:20:35.442989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.501217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.137 [2024-12-11 13:20:35.501458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.137 [2024-12-11 13:20:35.501484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.137 [2024-12-11 13:20:35.501510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.501637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.137 [2024-12-11 
13:20:35.501651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.137 [2024-12-11 13:20:35.501663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.137 [2024-12-11 13:20:35.501674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.501736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.137 [2024-12-11 13:20:35.501750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.137 [2024-12-11 13:20:35.501762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.137 [2024-12-11 13:20:35.501773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.501803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.137 [2024-12-11 13:20:35.501815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.137 [2024-12-11 13:20:35.501827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.137 [2024-12-11 13:20:35.501838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.137 [2024-12-11 13:20:35.637250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.137 [2024-12-11 13:20:35.637464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.137 [2024-12-11 13:20:35.637493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.137 [2024-12-11 13:20:35.637505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.747906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.747974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.397 [2024-12-11 13:20:35.747991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.397 [2024-12-11 13:20:35.748208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.397 [2024-12-11 13:20:35.748288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.397 [2024-12-11 13:20:35.748459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748511] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.397 [2024-12-11 13:20:35.748544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.397 [2024-12-11 13:20:35.748625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.397 [2024-12-11 13:20:35.748708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.397 [2024-12-11 13:20:35.748720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.397 [2024-12-11 13:20:35.748732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.397 [2024-12-11 13:20:35.748911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.482 ms, result 0 00:22:45.335 00:22:45.335 00:22:45.595 13:20:36 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:45.595 13:20:36 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:45.854 13:20:37 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.113 [2024-12-11 13:20:37.495143] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:22:46.113 [2024-12-11 13:20:37.495291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80154 ] 00:22:46.372 [2024-12-11 13:20:37.679281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.372 [2024-12-11 13:20:37.821053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.946 [2024-12-11 13:20:38.251513] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.946 [2024-12-11 13:20:38.251598] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.946 [2024-12-11 13:20:38.418553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.946 [2024-12-11 13:20:38.418632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:46.946 [2024-12-11 13:20:38.418652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:46.946 [2024-12-11 13:20:38.418664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.946 [2024-12-11 13:20:38.422086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.946 [2024-12-11 13:20:38.422143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.946 [2024-12-11 13:20:38.422159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.405 ms 00:22:46.946 [2024-12-11 13:20:38.422170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.946 [2024-12-11 13:20:38.422277] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:46.946 [2024-12-11 13:20:38.423363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:46.946 [2024-12-11 13:20:38.423397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.946 [2024-12-11 13:20:38.423409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.946 [2024-12-11 13:20:38.423421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:22:46.946 [2024-12-11 13:20:38.423432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.946 [2024-12-11 13:20:38.425894] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:46.946 [2024-12-11 13:20:38.446345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.946 [2024-12-11 13:20:38.446386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:46.946 [2024-12-11 13:20:38.446403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.483 ms 00:22:46.946 [2024-12-11 13:20:38.446414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.946 [2024-12-11 13:20:38.446533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.946 [2024-12-11 13:20:38.446549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:46.946 [2024-12-11 13:20:38.446561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:46.947 [2024-12-11 13:20:38.446572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.458477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:46.947 [2024-12-11 13:20:38.458513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.947 [2024-12-11 13:20:38.458527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.877 ms 00:22:46.947 [2024-12-11 13:20:38.458537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.458682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.458698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.947 [2024-12-11 13:20:38.458710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:46.947 [2024-12-11 13:20:38.458722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.458761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.458773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:46.947 [2024-12-11 13:20:38.458785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:46.947 [2024-12-11 13:20:38.458796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.458824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:46.947 [2024-12-11 13:20:38.464783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.464934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.947 [2024-12-11 13:20:38.464955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:22:46.947 [2024-12-11 13:20:38.464966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.465037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.465050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:46.947 [2024-12-11 13:20:38.465062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:46.947 [2024-12-11 13:20:38.465072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.465100] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:46.947 [2024-12-11 13:20:38.465144] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:46.947 [2024-12-11 13:20:38.465185] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:46.947 [2024-12-11 13:20:38.465204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:46.947 [2024-12-11 13:20:38.465298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:46.947 [2024-12-11 13:20:38.465313] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:46.947 [2024-12-11 13:20:38.465327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:46.947 [2024-12-11 13:20:38.465345] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465358] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465370] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:46.947 [2024-12-11 13:20:38.465381] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:46.947 [2024-12-11 13:20:38.465392] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:46.947 [2024-12-11 13:20:38.465403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:46.947 [2024-12-11 13:20:38.465415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.465425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:46.947 [2024-12-11 13:20:38.465436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:46.947 [2024-12-11 13:20:38.465446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.465527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.947 [2024-12-11 13:20:38.465543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:46.947 [2024-12-11 13:20:38.465561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:46.947 [2024-12-11 13:20:38.465571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.947 [2024-12-11 13:20:38.465665] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:46.947 [2024-12-11 13:20:38.465679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:46.947 [2024-12-11 13:20:38.465690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:46.947 [2024-12-11 13:20:38.465723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:46.947 [2024-12-11 13:20:38.465754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.947 [2024-12-11 13:20:38.465774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:46.947 [2024-12-11 13:20:38.465796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:46.947 [2024-12-11 13:20:38.465806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.947 [2024-12-11 13:20:38.465816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:46.947 [2024-12-11 13:20:38.465825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:46.947 [2024-12-11 13:20:38.465836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:46.947 [2024-12-11 13:20:38.465856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465865] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:46.947 [2024-12-11 13:20:38.465885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:46.947 [2024-12-11 13:20:38.465914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:46.947 [2024-12-11 13:20:38.465923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.947 [2024-12-11 13:20:38.465933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:46.947 [2024-12-11 13:20:38.465942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:46.948 [2024-12-11 13:20:38.465952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.948 [2024-12-11 13:20:38.465961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:46.948 [2024-12-11 13:20:38.465970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:46.948 [2024-12-11 13:20:38.465979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.948 [2024-12-11 13:20:38.465988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:46.948 [2024-12-11 13:20:38.465997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:46.948 [2024-12-11 13:20:38.466006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.948 [2024-12-11 13:20:38.466015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:46.948 [2024-12-11 13:20:38.466025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:46.948 [2024-12-11 13:20:38.466034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.948 [2024-12-11 13:20:38.466044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:46.948 [2024-12-11 13:20:38.466053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:46.948 [2024-12-11 13:20:38.466062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.948 [2024-12-11 13:20:38.466071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:46.948 [2024-12-11 13:20:38.466080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:46.948 [2024-12-11 13:20:38.466092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.948 [2024-12-11 13:20:38.466103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:46.948 [2024-12-11 13:20:38.466123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:46.948 [2024-12-11 13:20:38.466139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.948 [2024-12-11 13:20:38.466149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.948 [2024-12-11 13:20:38.466160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:46.948 [2024-12-11 13:20:38.466170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:46.948 [2024-12-11 13:20:38.466179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:46.948 
[2024-12-11 13:20:38.466188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:46.948 [2024-12-11 13:20:38.466198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:46.948 [2024-12-11 13:20:38.466208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:46.948 [2024-12-11 13:20:38.466219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:46.948 [2024-12-11 13:20:38.466233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:46.948 [2024-12-11 13:20:38.466254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:46.948 [2024-12-11 13:20:38.466264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:46.948 [2024-12-11 13:20:38.466275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:46.948 [2024-12-11 13:20:38.466285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:46.948 [2024-12-11 13:20:38.466295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:46.948 [2024-12-11 13:20:38.466306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:46.948 [2024-12-11 13:20:38.466317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:46.948 [2024-12-11 13:20:38.466328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:46.948 [2024-12-11 13:20:38.466338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:46.948 [2024-12-11 13:20:38.466390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:46.948 [2024-12-11 13:20:38.466402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:46.948 [2024-12-11 13:20:38.466424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:46.948 [2024-12-11 13:20:38.466434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:46.948 [2024-12-11 13:20:38.466446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:46.948 [2024-12-11 13:20:38.466457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.948 [2024-12-11 13:20:38.466473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:46.948 [2024-12-11 13:20:38.466484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:22:46.948 [2024-12-11 13:20:38.466494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.513025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.513093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.210 [2024-12-11 13:20:38.513123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.537 ms 00:22:47.210 [2024-12-11 13:20:38.513136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.513357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.513372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.210 [2024-12-11 13:20:38.513385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:47.210 [2024-12-11 13:20:38.513395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.578284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.578376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.210 [2024-12-11 13:20:38.578395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.962 ms 00:22:47.210 [2024-12-11 13:20:38.578407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.578541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.578555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.210 [2024-12-11 13:20:38.578568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.210 [2024-12-11 13:20:38.578579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.579611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.579729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.210 [2024-12-11 13:20:38.579809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:22:47.210 [2024-12-11 13:20:38.579852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.580036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.580086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.210 [2024-12-11 13:20:38.580183] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:47.210 [2024-12-11 13:20:38.580215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.603571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.603788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.210 [2024-12-11 13:20:38.603925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.343 ms 00:22:47.210 [2024-12-11 13:20:38.603965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.624238] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:47.210 [2024-12-11 13:20:38.624279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:47.210 [2024-12-11 13:20:38.624296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.624309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:47.210 [2024-12-11 13:20:38.624322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.187 ms 00:22:47.210 [2024-12-11 13:20:38.624332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.654563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.654757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:47.210 [2024-12-11 13:20:38.654782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.168 ms 00:22:47.210 [2024-12-11 13:20:38.654794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.673723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.673873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:47.210 [2024-12-11 13:20:38.673895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.823 ms 00:22:47.210 [2024-12-11 13:20:38.673907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.691877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.691916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:47.210 [2024-12-11 13:20:38.691931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.916 ms 00:22:47.210 [2024-12-11 13:20:38.691941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.210 [2024-12-11 13:20:38.692790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.210 [2024-12-11 13:20:38.692822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.211 [2024-12-11 13:20:38.692837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:22:47.211 [2024-12-11 13:20:38.692847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.789908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.790261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:47.470 [2024-12-11 13:20:38.790293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.183 ms 00:22:47.470 [2024-12-11 13:20:38.790305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.802896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:47.470 [2024-12-11 13:20:38.829081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.829177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.470 [2024-12-11 13:20:38.829213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.675 ms 00:22:47.470 [2024-12-11 13:20:38.829233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.829430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.829446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:47.470 [2024-12-11 13:20:38.829459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:47.470 [2024-12-11 13:20:38.829470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.829553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.829567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.470 [2024-12-11 13:20:38.829578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:47.470 [2024-12-11 13:20:38.829596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.829642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.829657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:47.470 [2024-12-11 13:20:38.829668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:47.470 [2024-12-11 13:20:38.829680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.829726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:47.470 [2024-12-11 13:20:38.829739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.829750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:47.470 [2024-12-11 13:20:38.829760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:47.470 [2024-12-11 13:20:38.829772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.470 [2024-12-11 13:20:38.868861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.470 [2024-12-11 13:20:38.869133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.470 [2024-12-11 13:20:38.869161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.126 ms 00:22:47.471 [2024-12-11 13:20:38.869173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.471 [2024-12-11 13:20:38.869380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.471 [2024-12-11 13:20:38.869395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.471 [2024-12-11 13:20:38.869408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:47.471 [2024-12-11 13:20:38.869419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:47.471 [2024-12-11 13:20:38.870769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.471 [2024-12-11 13:20:38.875596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.609 ms, result 0 00:22:47.471 [2024-12-11 13:20:38.876603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:47.471 [2024-12-11 13:20:38.895589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.730  [2024-12-11T13:20:39.298Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-12-11 13:20:39.066873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:47.730 [2024-12-11 13:20:39.082497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.082556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:47.730 [2024-12-11 13:20:39.082582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.730 [2024-12-11 13:20:39.082594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.730 [2024-12-11 13:20:39.082624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:47.730 [2024-12-11 13:20:39.087489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.087655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:47.730 [2024-12-11 13:20:39.087683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:22:47.730 [2024-12-11 13:20:39.087695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.730 [2024-12-11 13:20:39.095504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.095555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:47.730 [2024-12-11 13:20:39.095573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.777 ms 00:22:47.730 [2024-12-11 13:20:39.095586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.730 [2024-12-11 13:20:39.098977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.099151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:47.730 [2024-12-11 13:20:39.099181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:22:47.730 [2024-12-11 13:20:39.099196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.730 [2024-12-11 13:20:39.105559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.105602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:47.730 [2024-12-11 13:20:39.105618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:22:47.730 [2024-12-11 13:20:39.105630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.730 [2024-12-11 13:20:39.148956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.730 [2024-12-11 13:20:39.149260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:47.731 [2024-12-11 13:20:39.149289] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 43.304 ms 00:22:47.731 [2024-12-11 13:20:39.149301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.731 [2024-12-11 13:20:39.173072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.731 [2024-12-11 13:20:39.173162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:47.731 [2024-12-11 13:20:39.173183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.652 ms 00:22:47.731 [2024-12-11 13:20:39.173195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.731 [2024-12-11 13:20:39.173411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.731 [2024-12-11 13:20:39.173427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:47.731 [2024-12-11 13:20:39.173455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:47.731 [2024-12-11 13:20:39.173467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.731 [2024-12-11 13:20:39.211504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.731 [2024-12-11 13:20:39.211565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:47.731 [2024-12-11 13:20:39.211584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.059 ms 00:22:47.731 [2024-12-11 13:20:39.211612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.731 [2024-12-11 13:20:39.248292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.731 [2024-12-11 13:20:39.248352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:47.731 [2024-12-11 13:20:39.248370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.655 ms 00:22:47.731 [2024-12-11 13:20:39.248397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.731 [2024-12-11 13:20:39.284654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.731 [2024-12-11 13:20:39.284832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:47.731 [2024-12-11 13:20:39.284857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.237 ms 00:22:47.731 [2024-12-11 13:20:39.284869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.991 [2024-12-11 13:20:39.322417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.991 [2024-12-11 13:20:39.322483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:47.991 [2024-12-11 13:20:39.322501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:22:47.991 [2024-12-11 13:20:39.322512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.991 [2024-12-11 13:20:39.322592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:47.991 [2024-12-11 13:20:39.322614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:47.991 [2024-12-11 13:20:39.322665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:47.991 [2024-12-11 13:20:39.322937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free
00:22:47.991 [2024-12-11 13:20:39.322949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 30-100: 0 / 261120 wr_cnt: 0 state: free (identical for all 71 bands)
00:22:47.992 [2024-12-11 13:20:39.323811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:47.992 [2024-12-11 13:20:39.323822] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38
00:22:47.992 [2024-12-11 13:20:39.323834] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:47.992 [2024-12-11 13:20:39.323844] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960
00:22:47.992 [2024-12-11 13:20:39.323854] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:47.992 [2024-12-11 13:20:39.323866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:47.992 [2024-12-11 13:20:39.323876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:47.992 [2024-12-11 13:20:39.323887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:47.992 [2024-12-11 13:20:39.323904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:47.992 [2024-12-11 13:20:39.323913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:47.992 [2024-12-11 13:20:39.323922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:47.992 [2024-12-11 13:20:39.323933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:47.992 [2024-12-11 13:20:39.323944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:47.992 [2024-12-11 13:20:39.323956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms
00:22:47.992 [2024-12-11 13:20:39.323966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.346181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:47.992 [2024-12-11 13:20:39.346356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:47.992 [2024-12-11 13:20:39.346381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.222 ms
00:22:47.992 [2024-12-11 13:20:39.346392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.347059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:47.992 [2024-12-11 13:20:39.347076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:47.992 [2024-12-11 13:20:39.347089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms
00:22:47.992 [2024-12-11 13:20:39.347099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.405837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:47.992 [2024-12-11 13:20:39.406070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:47.992 [2024-12-11 13:20:39.406096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:47.992 [2024-12-11 13:20:39.406132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.406262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:47.992 [2024-12-11 13:20:39.406275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:47.992 [2024-12-11 13:20:39.406286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:47.992 [2024-12-11 13:20:39.406297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.406362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:47.992 [2024-12-11 13:20:39.406376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:47.992 [2024-12-11 13:20:39.406388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:47.992 [2024-12-11 13:20:39.406399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
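A note on the statistics block above: WAF is the write amplification factor, the ratio of media writes to host writes, so the pairing of total writes: 960 with user writes: 0 is reported as inf because no user I/O reached the device during this phase (the 960 writes are presumably internal metadata traffic). A minimal bash sketch for recomputing the ratio from a saved copy of this console output; the build.log filename is an assumption, not something the test produces:

    # Recompute WAF = total (media) writes / user writes from the
    # ftl_dev_dump_stats lines above; prints "inf" when user writes
    # are zero, matching the "WAF: inf" line in the dump.
    # (build.log is a hypothetical capture of this console output.)
    awk '/ftl_dev_dump_stats/ && /total writes:/ { total = $NF }
         /ftl_dev_dump_stats/ && /user writes:/  { user  = $NF }
         END { print (user > 0 ? total / user : "inf") }' build.log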
00:22:47.992 [2024-12-11 13:20:39.406427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:47.992 [2024-12-11 13:20:39.406438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:47.992 [2024-12-11 13:20:39.406449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:47.992 [2024-12-11 13:20:39.406460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:47.992 [2024-12-11 13:20:39.541000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:47.992 [2024-12-11 13:20:39.541095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:47.992 [2024-12-11 13:20:39.541125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:47.993 [2024-12-11 13:20:39.541139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.649957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:48.252 [2024-12-11 13:20:39.650325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:48.252 [2024-12-11 13:20:39.650491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:48.252 [2024-12-11 13:20:39.650568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:48.252 [2024-12-11 13:20:39.650750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:48.252 [2024-12-11 13:20:39.650830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:48.252 [2024-12-11 13:20:39.650915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.650926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.650979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:48.252 [2024-12-11 13:20:39.650996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:48.252 [2024-12-11 13:20:39.651007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:48.252 [2024-12-11 13:20:39.651018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:48.252 [2024-12-11 13:20:39.651202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.633 ms, result 0
00:22:49.632
00:22:49.632
00:22:49.632 13:20:40 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80190
00:22:49.632 13:20:40 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:22:49.632 13:20:40 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80190
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 80190 ']'
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:49.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:49.632 13:20:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:22:49.632 [2024-12-11 13:20:40.957010] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
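Each management step in these transcripts is bracketed by a trace_step quadruplet (Action or Rollback, name, duration, status), and finish_msg reports the total for the whole sequence; the 'FTL shutdown' process above summed to 569.633 ms. A rough bash sketch, again assuming the console output has been saved to a hypothetical build.log, that pairs each step name with its reported duration and lists the slowest steps first:

    # Pair each trace_step "name:" line with the "duration:" line that
    # follows it, then rank the management steps by time spent.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0 + 0, name }' build.log |
        sort -rn | head

Run over the startup sequence that follows, the heaviest steps by this measure are Restore P2L checkpoints (96.410 ms), Initialize NV cache (56.069 ms) and Initialize metadata (53.250 ms).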
00:22:49.632 [2024-12-11 13:20:40.957182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80190 ] 00:22:49.632 [2024-12-11 13:20:41.140473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.891 [2024-12-11 13:20:41.280407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.829 13:20:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.829 13:20:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:50.829 13:20:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:51.088 [2024-12-11 13:20:42.513764] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:51.088 [2024-12-11 13:20:42.514073] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:51.349 [2024-12-11 13:20:42.701692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.701760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:51.349 [2024-12-11 13:20:42.701797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:51.349 [2024-12-11 13:20:42.701808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.706036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.706076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:51.349 [2024-12-11 13:20:42.706092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.211 ms 00:22:51.349 [2024-12-11 13:20:42.706102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.706224] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:51.349 [2024-12-11 13:20:42.707277] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:51.349 [2024-12-11 13:20:42.707312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.707324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:51.349 [2024-12-11 13:20:42.707337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:22:51.349 [2024-12-11 13:20:42.707347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.710004] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:51.349 [2024-12-11 13:20:42.729739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.729788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:51.349 [2024-12-11 13:20:42.729804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.771 ms 00:22:51.349 [2024-12-11 13:20:42.729821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.729939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.729959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:51.349 [2024-12-11 13:20:42.729972] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:51.349 [2024-12-11 13:20:42.729988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.742468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.742521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:51.349 [2024-12-11 13:20:42.742536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:22:51.349 [2024-12-11 13:20:42.742553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.742733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.742756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:51.349 [2024-12-11 13:20:42.742769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:22:51.349 [2024-12-11 13:20:42.742794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.742828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.742845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:51.349 [2024-12-11 13:20:42.742857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:51.349 [2024-12-11 13:20:42.742873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.742904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:51.349 [2024-12-11 13:20:42.748845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.748877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:51.349 [2024-12-11 13:20:42.748895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:22:51.349 [2024-12-11 13:20:42.748906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.748978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.748991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:51.349 [2024-12-11 13:20:42.749008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:51.349 [2024-12-11 13:20:42.749024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.749057] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:51.349 [2024-12-11 13:20:42.749087] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:51.349 [2024-12-11 13:20:42.749161] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:51.349 [2024-12-11 13:20:42.749184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:51.349 [2024-12-11 13:20:42.749283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:51.349 [2024-12-11 13:20:42.749303] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:51.349 [2024-12-11 13:20:42.749329] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:51.349 [2024-12-11 13:20:42.749344] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:51.349 [2024-12-11 13:20:42.749363] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:51.349 [2024-12-11 13:20:42.749375] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:51.349 [2024-12-11 13:20:42.749391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:51.349 [2024-12-11 13:20:42.749402] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:51.349 [2024-12-11 13:20:42.749423] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:51.349 [2024-12-11 13:20:42.749435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.749451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:51.349 [2024-12-11 13:20:42.749462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:22:51.349 [2024-12-11 13:20:42.749478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.749571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.349 [2024-12-11 13:20:42.749587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:51.349 [2024-12-11 13:20:42.749598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:51.349 [2024-12-11 13:20:42.749615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.349 [2024-12-11 13:20:42.749710] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:51.349 [2024-12-11 13:20:42.749729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:51.349 [2024-12-11 13:20:42.749741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.349 [2024-12-11 13:20:42.749757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.349 [2024-12-11 13:20:42.749769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:51.349 [2024-12-11 13:20:42.749785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:51.349 [2024-12-11 13:20:42.749795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:51.349 [2024-12-11 13:20:42.749816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:51.349 [2024-12-11 13:20:42.749826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:51.349 [2024-12-11 13:20:42.749842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.349 [2024-12-11 13:20:42.749851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:51.349 [2024-12-11 13:20:42.749866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:51.349 [2024-12-11 13:20:42.749875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.349 [2024-12-11 13:20:42.749892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:51.349 [2024-12-11 13:20:42.749902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:51.350 [2024-12-11 13:20:42.749917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.350 
[2024-12-11 13:20:42.749927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:51.350 [2024-12-11 13:20:42.749943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:51.350 [2024-12-11 13:20:42.749965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.350 [2024-12-11 13:20:42.749980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:51.350 [2024-12-11 13:20:42.749990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:51.350 [2024-12-11 13:20:42.750036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:51.350 [2024-12-11 13:20:42.750071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:51.350 [2024-12-11 13:20:42.750122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:51.350 [2024-12-11 13:20:42.750158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.350 [2024-12-11 13:20:42.750183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:51.350 [2024-12-11 13:20:42.750198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:51.350 [2024-12-11 13:20:42.750207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.350 [2024-12-11 13:20:42.750222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:51.350 [2024-12-11 13:20:42.750232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:51.350 [2024-12-11 13:20:42.750253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:51.350 [2024-12-11 13:20:42.750277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:51.350 [2024-12-11 13:20:42.750287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750302] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:51.350 [2024-12-11 13:20:42.750319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:51.350 [2024-12-11 13:20:42.750335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.350 [2024-12-11 13:20:42.750361] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:51.350 [2024-12-11 13:20:42.750372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:51.350 [2024-12-11 13:20:42.750387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:51.350 [2024-12-11 13:20:42.750396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:51.350 [2024-12-11 13:20:42.750411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:51.350 [2024-12-11 13:20:42.750420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:51.350 [2024-12-11 13:20:42.750437] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:51.350 [2024-12-11 13:20:42.750450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:51.350 [2024-12-11 13:20:42.750485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:51.350 [2024-12-11 13:20:42.750501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:51.350 [2024-12-11 13:20:42.750512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:51.350 [2024-12-11 13:20:42.750528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:51.350 [2024-12-11 13:20:42.750539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:51.350 [2024-12-11 13:20:42.750554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:51.350 [2024-12-11 13:20:42.750565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:51.350 [2024-12-11 13:20:42.750580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:51.350 [2024-12-11 13:20:42.750591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:51.350 [2024-12-11 13:20:42.750660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:51.350 [2024-12-11 
13:20:42.750672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:51.350 [2024-12-11 13:20:42.750705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:51.350 [2024-12-11 13:20:42.750721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:51.350 [2024-12-11 13:20:42.750732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:51.350 [2024-12-11 13:20:42.750749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.750760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:51.350 [2024-12-11 13:20:42.750777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:22:51.350 [2024-12-11 13:20:42.750793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.804039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.804094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:51.350 [2024-12-11 13:20:42.804144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.250 ms 00:22:51.350 [2024-12-11 13:20:42.804164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.804371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.804385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:51.350 [2024-12-11 13:20:42.804401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:51.350 [2024-12-11 13:20:42.804411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.860431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.860487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:51.350 [2024-12-11 13:20:42.860510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.069 ms 00:22:51.350 [2024-12-11 13:20:42.860522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.860650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.860663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:51.350 [2024-12-11 13:20:42.860680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:51.350 [2024-12-11 13:20:42.860691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.861494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.861510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:51.350 [2024-12-11 13:20:42.861533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:22:51.350 [2024-12-11 13:20:42.861544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.861703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.861716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:51.350 [2024-12-11 13:20:42.861733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:51.350 [2024-12-11 13:20:42.861744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.350 [2024-12-11 13:20:42.888770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.350 [2024-12-11 13:20:42.888822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:51.350 [2024-12-11 13:20:42.888844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.032 ms 00:22:51.350 [2024-12-11 13:20:42.888856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:42.921684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:51.611 [2024-12-11 13:20:42.921730] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:51.611 [2024-12-11 13:20:42.921753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:42.921765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:51.611 [2024-12-11 13:20:42.921783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.782 ms 00:22:51.611 [2024-12-11 13:20:42.921808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:42.951763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:42.951827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:51.611 [2024-12-11 13:20:42.951867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.900 ms 00:22:51.611 [2024-12-11 13:20:42.951879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:42.970170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:42.970210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:51.611 [2024-12-11 13:20:42.970232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.222 ms 00:22:51.611 [2024-12-11 13:20:42.970258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:42.988260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:42.988294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:51.611 [2024-12-11 13:20:42.988311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.946 ms 00:22:51.611 [2024-12-11 13:20:42.988337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:42.989131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:42.989173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:51.611 [2024-12-11 13:20:42.989192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:22:51.611 [2024-12-11 13:20:42.989204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 
13:20:43.085494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.085586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:51.611 [2024-12-11 13:20:43.085624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.410 ms 00:22:51.611 [2024-12-11 13:20:43.085637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.096623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:51.611 [2024-12-11 13:20:43.122447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.122547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:51.611 [2024-12-11 13:20:43.122572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.739 ms 00:22:51.611 [2024-12-11 13:20:43.122589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.122754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.122774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:51.611 [2024-12-11 13:20:43.122787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:51.611 [2024-12-11 13:20:43.122803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.122876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.122894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:51.611 [2024-12-11 13:20:43.122905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:51.611 [2024-12-11 13:20:43.122928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.122958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.122975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:51.611 [2024-12-11 13:20:43.122987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:51.611 [2024-12-11 13:20:43.123002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.123053] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:51.611 [2024-12-11 13:20:43.123080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.123098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:51.611 [2024-12-11 13:20:43.123136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:51.611 [2024-12-11 13:20:43.123148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.160028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.160094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:51.611 [2024-12-11 13:20:43.160135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.880 ms 00:22:51.611 [2024-12-11 13:20:43.160147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.611 [2024-12-11 13:20:43.160291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.611 [2024-12-11 13:20:43.160305] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:51.611 [2024-12-11 13:20:43.160323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:22:51.611 [2024-12-11 13:20:43.160340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:51.611 [2024-12-11 13:20:43.161632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:51.611 [2024-12-11 13:20:43.166351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 460.326 ms, result 0
00:22:51.611 [2024-12-11 13:20:43.167558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:51.871 Some configs were skipped because the RPC state that can call them passed over.
00:22:51.871 13:20:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:22:51.871 [2024-12-11 13:20:43.415915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:51.871 [2024-12-11 13:20:43.416205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:51.871 [2024-12-11 13:20:43.416356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms
00:22:51.871 [2024-12-11 13:20:43.416437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:51.871 [2024-12-11 13:20:43.416526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.414 ms, result 0
00:22:51.871 true
00:22:51.871 13:20:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:22:52.130 [2024-12-11 13:20:43.627485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:52.130 [2024-12-11 13:20:43.627750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:52.130 [2024-12-11 13:20:43.627896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms
00:22:52.130 [2024-12-11 13:20:43.627938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:52.130 [2024-12-11 13:20:43.628033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.024 ms, result 0
00:22:52.130 true
00:22:52.130 13:20:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80190
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80190 ']'
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80190
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80190
00:22:52.130 killing process with pid 80190
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80190'
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 80190
00:22:52.130 13:20:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 80190
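The two bdev_ftl_unmap calls above are not arbitrary: the layout dump earlier reported L2P entries: 23592960, and 23592960 - 1024 = 23591936, so the test trims the first and the last 1024-block windows of the device's LBA space. A small bash sketch of the same arithmetic; the bdev name ftl0 and the RPC arguments are taken verbatim from the log, while the relative script path is illustrative:

    # Unmap the first and last 1024-block windows of the FTL LBA space.
    L2P_ENTRIES=23592960                        # from "L2P entries" in the layout dump
    NUM_BLOCKS=1024
    TAIL_LBA=$(( L2P_ENTRIES - NUM_BLOCKS ))    # 23591936, matching the second call
    ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0           --num_blocks "$NUM_BLOCKS"
    ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba "$TAIL_LBA" --num_blocks "$NUM_BLOCKS"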
00:22:53.510 [2024-12-11 13:20:44.894750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.894837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:53.510 [2024-12-11 13:20:44.894855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:53.510 [2024-12-11 13:20:44.894868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.894897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:22:53.510 [2024-12-11 13:20:44.899810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.899940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:53.510 [2024-12-11 13:20:44.900032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms
00:22:53.510 [2024-12-11 13:20:44.900069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.900408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.900454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:53.510 [2024-12-11 13:20:44.900488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms
00:22:53.510 [2024-12-11 13:20:44.900583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.904055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.904199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:53.510 [2024-12-11 13:20:44.904293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.420 ms
00:22:53.510 [2024-12-11 13:20:44.904330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.910082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.910219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:53.510 [2024-12-11 13:20:44.910307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms
00:22:53.510 [2024-12-11 13:20:44.910343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.926740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.927029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:53.510 [2024-12-11 13:20:44.927378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.318 ms
00:22:53.510 [2024-12-11 13:20:44.927431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.938831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.939007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:53.510 [2024-12-11 13:20:44.939140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.976 ms
00:22:53.510 [2024-12-11 13:20:44.939179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.939347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.939481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:53.510 [2024-12-11 13:20:44.939565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms
00:22:53.510 [2024-12-11 13:20:44.939597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.954925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.955082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:53.510 [2024-12-11 13:20:44.955215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.306 ms
00:22:53.510 [2024-12-11 13:20:44.955254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.969314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.969457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:53.510 [2024-12-11 13:20:44.969602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.986 ms
00:22:53.510 [2024-12-11 13:20:44.969641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.983774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.983919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:53.510 [2024-12-11 13:20:44.984067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.073 ms
00:22:53.510 [2024-12-11 13:20:44.984104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.998192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.510 [2024-12-11 13:20:44.998324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:53.510 [2024-12-11 13:20:44.998409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.985 ms
00:22:53.510 [2024-12-11 13:20:44.998445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.510 [2024-12-11 13:20:44.998520] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:53.510 [2024-12-11 13:20:44.998565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-84: 0 / 261120 wr_cnt: 0 state: free (identical for all 84 bands)
00:22:53.511 [2024-12-11 13:20:45.000290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.511 [2024-12-11 13:20:45.000507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.511 [2024-12-11 13:20:45.000528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:22:53.511 [2024-12-11 13:20:45.000545] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:53.511 [2024-12-11 13:20:45.000558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:53.511 [2024-12-11 13:20:45.000568] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:53.511 [2024-12-11 13:20:45.000583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:53.511 [2024-12-11 13:20:45.000593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.511 [2024-12-11 13:20:45.000608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.511 [2024-12-11 13:20:45.000617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.512 [2024-12-11 13:20:45.000629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.512 [2024-12-11 13:20:45.000638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.512 [2024-12-11 13:20:45.000652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
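The statistics block above closes out the first FTL shutdown: 960 total writes against zero user writes, so the write-amplification factor is printed as inf. Assuming the conventional FTL definition (an inference from the counters shown, not a formula taken from the log itself):

$$
\mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\to\; \infty
$$

With no user I/O before shutdown, the 960 writes are presumably the metadata persisted during startup and clean shutdown (superblock, band info, trim and valid maps), which is consistent with "total valid LBAs: 0" and every band reading 0 / 261120.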
00:22:53.512 [2024-12-11 13:20:45.000662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.512 [2024-12-11 13:20:45.000677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:22:53.512 [2024-12-11 13:20:45.000688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.512 [2024-12-11 13:20:45.021234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.512 [2024-12-11 13:20:45.021400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.512 [2024-12-11 13:20:45.021428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.545 ms 00:22:53.512 [2024-12-11 13:20:45.021439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.512 [2024-12-11 13:20:45.022082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.512 [2024-12-11 13:20:45.022098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.512 [2024-12-11 13:20:45.022134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:22:53.512 [2024-12-11 13:20:45.022145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.771 [2024-12-11 13:20:45.094318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.771 [2024-12-11 13:20:45.094385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.771 [2024-12-11 13:20:45.094405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.771 [2024-12-11 13:20:45.094417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.771 [2024-12-11 13:20:45.094580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.771 [2024-12-11 13:20:45.094594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.771 [2024-12-11 13:20:45.094614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.771 [2024-12-11 13:20:45.094625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.771 [2024-12-11 13:20:45.094700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.771 [2024-12-11 13:20:45.094714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.771 [2024-12-11 13:20:45.094732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.771 [2024-12-11 13:20:45.094743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.771 [2024-12-11 13:20:45.094767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.771 [2024-12-11 13:20:45.094779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.771 [2024-12-11 13:20:45.094793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.771 [2024-12-11 13:20:45.094806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.771 [2024-12-11 13:20:45.231654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.771 [2024-12-11 13:20:45.231719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.771 [2024-12-11 13:20:45.231741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.771 [2024-12-11 13:20:45.231752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 
13:20:45.340107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.030 [2024-12-11 13:20:45.340207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.030 [2024-12-11 13:20:45.340398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.030 [2024-12-11 13:20:45.340480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.030 [2024-12-11 13:20:45.340684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.030 [2024-12-11 13:20:45.340773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.030 [2024-12-11 13:20:45.340878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.340949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.030 [2024-12-11 13:20:45.340961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.030 [2024-12-11 13:20:45.340978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.030 [2024-12-11 13:20:45.340989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.030 [2024-12-11 13:20:45.341182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 447.096 ms, result 0 00:22:54.968 13:20:46 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.227 [2024-12-11 13:20:46.553658] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:22:55.227 [2024-12-11 13:20:46.553799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80266 ] 00:22:55.227 [2024-12-11 13:20:46.739564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.485 [2024-12-11 13:20:46.877630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.744 [2024-12-11 13:20:47.290391] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.744 [2024-12-11 13:20:47.290470] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.003 [2024-12-11 13:20:47.456937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.003 [2024-12-11 13:20:47.457003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:56.003 [2024-12-11 13:20:47.457019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:56.003 [2024-12-11 13:20:47.457046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.003 [2024-12-11 13:20:47.460413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.003 [2024-12-11 13:20:47.460612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.003 [2024-12-11 13:20:47.460637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:22:56.003 [2024-12-11 13:20:47.460649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.003 [2024-12-11 13:20:47.460794] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:56.003 [2024-12-11 13:20:47.461802] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:56.003 [2024-12-11 13:20:47.461837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.003 [2024-12-11 13:20:47.461850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.003 [2024-12-11 13:20:47.461862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:22:56.003 [2024-12-11 13:20:47.461873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.003 [2024-12-11 13:20:47.464460] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:56.003 [2024-12-11 13:20:47.484478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.003 [2024-12-11 13:20:47.484513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:56.003 [2024-12-11 13:20:47.484527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.051 ms 00:22:56.004 [2024-12-11 13:20:47.484553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.484660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.484675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:56.004 [2024-12-11 13:20:47.484687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:56.004 [2024-12-11 
13:20:47.484697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.496816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.496844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.004 [2024-12-11 13:20:47.496857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms 00:22:56.004 [2024-12-11 13:20:47.496883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.497008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.497025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.004 [2024-12-11 13:20:47.497036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:56.004 [2024-12-11 13:20:47.497047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.497082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.497094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.004 [2024-12-11 13:20:47.497104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:56.004 [2024-12-11 13:20:47.497115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.497164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:56.004 [2024-12-11 13:20:47.503119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.503157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.004 [2024-12-11 13:20:47.503185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.974 ms 00:22:56.004 [2024-12-11 13:20:47.503196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.503253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.503265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.004 [2024-12-11 13:20:47.503276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:56.004 [2024-12-11 13:20:47.503286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.503312] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:56.004 [2024-12-11 13:20:47.503341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:56.004 [2024-12-11 13:20:47.503379] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:56.004 [2024-12-11 13:20:47.503399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:56.004 [2024-12-11 13:20:47.503492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.004 [2024-12-11 13:20:47.503507] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.004 [2024-12-11 13:20:47.503521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
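The layout summary that follows reports 23,592,960 L2P entries at an L2P address size of 4 bytes. As a quick consistency check (worked out here, not printed by the log), that is exactly the 90.00 MiB "Region l2p" in the dump below:

$$
23{,}592{,}960 \times 4\,\mathrm{B} \;=\; 94{,}371{,}840\,\mathrm{B} \;=\; 90.00\,\mathrm{MiB}
$$

The same arithmetic applies to the spdk_dd invocation above: --count=65536 blocks at the FTL bdev's usual 4 KiB block size is 65536 × 4096 B = 256 MiB, matching the "Copying: 256/256 [MB]" progress reported near the end of this run.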
00:22:56.004 [2024-12-11 13:20:47.503538] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.004 [2024-12-11 13:20:47.503551] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.004 [2024-12-11 13:20:47.503563] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:56.004 [2024-12-11 13:20:47.503574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.004 [2024-12-11 13:20:47.503584] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.004 [2024-12-11 13:20:47.503595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.004 [2024-12-11 13:20:47.503606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.503616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.004 [2024-12-11 13:20:47.503627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:56.004 [2024-12-11 13:20:47.503637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.503717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.004 [2024-12-11 13:20:47.503732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.004 [2024-12-11 13:20:47.503743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:56.004 [2024-12-11 13:20:47.503753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.004 [2024-12-11 13:20:47.503843] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.004 [2024-12-11 13:20:47.503856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.004 [2024-12-11 13:20:47.503868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.004 [2024-12-11 13:20:47.503879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.503890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.004 [2024-12-11 13:20:47.503899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.503909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:56.004 [2024-12-11 13:20:47.503919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.004 [2024-12-11 13:20:47.503930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:56.004 [2024-12-11 13:20:47.503940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.004 [2024-12-11 13:20:47.503952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.004 [2024-12-11 13:20:47.503974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:56.004 [2024-12-11 13:20:47.503983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.004 [2024-12-11 13:20:47.503992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.004 [2024-12-11 13:20:47.504002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:56.004 [2024-12-11 13:20:47.504012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:56.004 [2024-12-11 13:20:47.504032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.004 [2024-12-11 13:20:47.504061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.004 [2024-12-11 13:20:47.504089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.004 [2024-12-11 13:20:47.504116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.004 [2024-12-11 13:20:47.504156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.004 [2024-12-11 13:20:47.504183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.004 [2024-12-11 13:20:47.504203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.004 [2024-12-11 13:20:47.504213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:56.004 [2024-12-11 13:20:47.504221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.004 [2024-12-11 13:20:47.504230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.004 [2024-12-11 13:20:47.504240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:56.004 [2024-12-11 13:20:47.504249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.004 [2024-12-11 13:20:47.504269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:56.004 [2024-12-11 13:20:47.504279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504288] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.004 [2024-12-11 13:20:47.504298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.004 [2024-12-11 13:20:47.504311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.004 [2024-12-11 13:20:47.504321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.004 [2024-12-11 13:20:47.504331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:56.004 [2024-12-11 13:20:47.504341] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.004 [2024-12-11 13:20:47.504350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.004 [2024-12-11 13:20:47.504360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.004 [2024-12-11 13:20:47.504369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.005 [2024-12-11 13:20:47.504378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.005 [2024-12-11 13:20:47.504390] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.005 [2024-12-11 13:20:47.504403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:56.005 [2024-12-11 13:20:47.504426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:56.005 [2024-12-11 13:20:47.504437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:56.005 [2024-12-11 13:20:47.504448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:56.005 [2024-12-11 13:20:47.504459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:56.005 [2024-12-11 13:20:47.504469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:56.005 [2024-12-11 13:20:47.504480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:56.005 [2024-12-11 13:20:47.504490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:56.005 [2024-12-11 13:20:47.504501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:56.005 [2024-12-11 13:20:47.504511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:56.005 [2024-12-11 13:20:47.504564] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.005 [2024-12-11 13:20:47.504576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.005 [2024-12-11 13:20:47.504599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.005 [2024-12-11 13:20:47.504609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.005 [2024-12-11 13:20:47.504619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.005 [2024-12-11 13:20:47.504630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.005 [2024-12-11 13:20:47.504645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.005 [2024-12-11 13:20:47.504655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:22:56.005 [2024-12-11 13:20:47.504665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.005 [2024-12-11 13:20:47.556414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.005 [2024-12-11 13:20:47.556630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.005 [2024-12-11 13:20:47.556754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.766 ms 00:22:56.005 [2024-12-11 13:20:47.556793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.005 [2024-12-11 13:20:47.557036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.005 [2024-12-11 13:20:47.557102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:56.005 [2024-12-11 13:20:47.557193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:56.005 [2024-12-11 13:20:47.557229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.620837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.621022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.265 [2024-12-11 13:20:47.621168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.656 ms 00:22:56.265 [2024-12-11 13:20:47.621211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.621364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.621499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.265 [2024-12-11 13:20:47.621600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:56.265 [2024-12-11 13:20:47.621632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.622415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.622527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.265 [2024-12-11 13:20:47.622603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:22:56.265 [2024-12-11 13:20:47.622644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.622823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.622861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.265 [2024-12-11 13:20:47.622933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:22:56.265 [2024-12-11 13:20:47.622968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.646276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.646419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.265 [2024-12-11 13:20:47.646497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.293 ms 00:22:56.265 [2024-12-11 13:20:47.646535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.667187] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:56.265 [2024-12-11 13:20:47.667359] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:56.265 [2024-12-11 13:20:47.667451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.667485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:56.265 [2024-12-11 13:20:47.667518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.766 ms 00:22:56.265 [2024-12-11 13:20:47.667549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.697911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.698070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:56.265 [2024-12-11 13:20:47.698094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.279 ms 00:22:56.265 [2024-12-11 13:20:47.698106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.717353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.717393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:56.265 [2024-12-11 13:20:47.717408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.101 ms 00:22:56.265 [2024-12-11 13:20:47.717434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.734999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.735056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:56.265 [2024-12-11 13:20:47.735072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.510 ms 00:22:56.265 [2024-12-11 13:20:47.735083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.735912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 13:20:47.735936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:56.265 [2024-12-11 13:20:47.735966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:22:56.265 [2024-12-11 13:20:47.735977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.265 [2024-12-11 13:20:47.829008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.265 [2024-12-11 
13:20:47.829101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:56.265 [2024-12-11 13:20:47.829132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.150 ms 00:22:56.265 [2024-12-11 13:20:47.829145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.840478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:56.524 [2024-12-11 13:20:47.866257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.866329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:56.524 [2024-12-11 13:20:47.866347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.991 ms 00:22:56.524 [2024-12-11 13:20:47.866384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.866536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.866551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:56.524 [2024-12-11 13:20:47.866564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:56.524 [2024-12-11 13:20:47.866574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.866647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.866659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:56.524 [2024-12-11 13:20:47.866671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:56.524 [2024-12-11 13:20:47.866689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.866759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.866774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:56.524 [2024-12-11 13:20:47.866786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:56.524 [2024-12-11 13:20:47.866797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.866841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:56.524 [2024-12-11 13:20:47.866856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.866867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:56.524 [2024-12-11 13:20:47.866878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:56.524 [2024-12-11 13:20:47.866889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.904606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.524 [2024-12-11 13:20:47.904648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:56.524 [2024-12-11 13:20:47.904662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.743 ms 00:22:56.524 [2024-12-11 13:20:47.904689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.524 [2024-12-11 13:20:47.904813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.525 [2024-12-11 13:20:47.904827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:56.525 [2024-12-11 
13:20:47.904838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:56.525 [2024-12-11 13:20:47.904849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.525 [2024-12-11 13:20:47.906175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.525 [2024-12-11 13:20:47.910525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 449.600 ms, result 0 00:22:56.525 [2024-12-11 13:20:47.911529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:56.525 [2024-12-11 13:20:47.929791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:57.461  [2024-12-11T13:20:50.006Z] Copying: 28/256 [MB] (28 MBps) [2024-12-11T13:20:51.385Z] Copying: 55/256 [MB] (26 MBps) [2024-12-11T13:20:52.321Z] Copying: 80/256 [MB] (24 MBps) [2024-12-11T13:20:53.258Z] Copying: 104/256 [MB] (24 MBps) [2024-12-11T13:20:54.195Z] Copying: 130/256 [MB] (25 MBps) [2024-12-11T13:20:55.132Z] Copying: 155/256 [MB] (25 MBps) [2024-12-11T13:20:56.069Z] Copying: 180/256 [MB] (24 MBps) [2024-12-11T13:20:57.006Z] Copying: 204/256 [MB] (24 MBps) [2024-12-11T13:20:58.385Z] Copying: 228/256 [MB] (24 MBps) [2024-12-11T13:20:58.385Z] Copying: 253/256 [MB] (24 MBps) [2024-12-11T13:20:58.644Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-11 13:20:58.416882] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.076 [2024-12-11 13:20:58.433635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.076 [2024-12-11 13:20:58.433827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.076 [2024-12-11 13:20:58.433866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:07.076 [2024-12-11 13:20:58.433878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.076 [2024-12-11 13:20:58.433932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:07.076 [2024-12-11 13:20:58.439030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.076 [2024-12-11 13:20:58.439165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.076 [2024-12-11 13:20:58.439249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.083 ms 00:23:07.076 [2024-12-11 13:20:58.439286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.076 [2024-12-11 13:20:58.439594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.076 [2024-12-11 13:20:58.439643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.076 [2024-12-11 13:20:58.439720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:23:07.076 [2024-12-11 13:20:58.439739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.076 [2024-12-11 13:20:58.442644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.076 [2024-12-11 13:20:58.442690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.076 [2024-12-11 13:20:58.442703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:23:07.076 [2024-12-11 13:20:58.442713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:07.076 [2024-12-11 13:20:58.448597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.076 [2024-12-11 13:20:58.448731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:07.076 [2024-12-11 13:20:58.449023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:23:07.077 [2024-12-11 13:20:58.449035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.489724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.489781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.077 [2024-12-11 13:20:58.489798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.654 ms 00:23:07.077 [2024-12-11 13:20:58.489809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.511237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.511286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.077 [2024-12-11 13:20:58.511326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.382 ms 00:23:07.077 [2024-12-11 13:20:58.511338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.511514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.511528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.077 [2024-12-11 13:20:58.511553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:07.077 [2024-12-11 13:20:58.511564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.547505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.547566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:07.077 [2024-12-11 13:20:58.547580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.979 ms 00:23:07.077 [2024-12-11 13:20:58.547607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.582783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.582822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:07.077 [2024-12-11 13:20:58.582836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.173 ms 00:23:07.077 [2024-12-11 13:20:58.582847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.077 [2024-12-11 13:20:58.617702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.077 [2024-12-11 13:20:58.617874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.077 [2024-12-11 13:20:58.617897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.853 ms 00:23:07.077 [2024-12-11 13:20:58.617908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.337 [2024-12-11 13:20:58.652772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.337 [2024-12-11 13:20:58.652954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.337 [2024-12-11 13:20:58.652975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.830 ms 00:23:07.337 
[2024-12-11 13:20:58.652987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.337 [2024-12-11 13:20:58.653046] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.337 [2024-12-11 13:20:58.653066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653340] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 
13:20:58.653633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:07.338 [2024-12-11 13:20:58.653915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.653999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.654010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:07.338 [2024-12-11 13:20:58.654020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.339 [2024-12-11 13:20:58.654237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.339 [2024-12-11 13:20:58.654248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 540c5ffa-c404-4d48-a834-4d9cb8eefb38 00:23:07.339 [2024-12-11 13:20:58.654260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.339 [2024-12-11 13:20:58.654270] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.339 [2024-12-11 13:20:58.654281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.339 [2024-12-11 13:20:58.654292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.339 [2024-12-11 13:20:58.654303] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.339 [2024-12-11 13:20:58.654313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.339 [2024-12-11 13:20:58.654329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.339 [2024-12-11 13:20:58.654338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.339 [2024-12-11 13:20:58.654348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.339 [2024-12-11 13:20:58.654358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.339 [2024-12-11 13:20:58.654369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.339 [2024-12-11 13:20:58.654381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:23:07.339 [2024-12-11 13:20:58.654392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.675037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.339 [2024-12-11 13:20:58.675208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.339 [2024-12-11 13:20:58.675229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.654 ms 00:23:07.339 [2024-12-11 13:20:58.675241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.675854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.339 [2024-12-11 13:20:58.675870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.339 [2024-12-11 13:20:58.675883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:23:07.339 [2024-12-11 13:20:58.675894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.735141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.339 [2024-12-11 13:20:58.735187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.339 [2024-12-11 13:20:58.735201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.339 [2024-12-11 13:20:58.735218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.735325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.339 [2024-12-11 13:20:58.735338] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.339 [2024-12-11 13:20:58.735349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.339 [2024-12-11 13:20:58.735360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.735419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.339 [2024-12-11 13:20:58.735433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.339 [2024-12-11 13:20:58.735444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.339 [2024-12-11 13:20:58.735455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.735481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.339 [2024-12-11 13:20:58.735492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.339 [2024-12-11 13:20:58.735503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.339 [2024-12-11 13:20:58.735514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.339 [2024-12-11 13:20:58.868485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.339 [2024-12-11 13:20:58.868568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.339 [2024-12-11 13:20:58.868585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.339 [2024-12-11 13:20:58.868612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.974381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.974684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.599 [2024-12-11 13:20:58.974836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.974878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.599 [2024-12-11 13:20:58.975321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.599 [2024-12-11 13:20:58.975414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.599 [2024-12-11 13:20:58.975593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.599 [2024-12-11 13:20:58.975677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.599 [2024-12-11 13:20:58.975760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.975823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.599 [2024-12-11 13:20:58.975841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.599 [2024-12-11 13:20:58.975851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.599 [2024-12-11 13:20:58.975863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.599 [2024-12-11 13:20:58.976038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.303 ms, result 0 00:23:08.537 00:23:08.537 00:23:08.796 13:21:00 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:09.055 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:09.055 13:21:00 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:09.055 13:21:00 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:09.055 13:21:00 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:09.055 13:21:00 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:09.055 13:21:00 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:09.314 13:21:00 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:09.314 Process with pid 80190 is not found 00:23:09.314 13:21:00 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80190 00:23:09.314 13:21:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80190 ']' 00:23:09.314 13:21:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80190 00:23:09.314 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80190) - No such process 00:23:09.314 13:21:00 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 80190 is not found' 00:23:09.314 00:23:09.314 real 1m14.464s 00:23:09.314 user 1m41.251s 00:23:09.314 sys 0m8.239s 00:23:09.314 ************************************ 00:23:09.314 END TEST ftl_trim 00:23:09.314 ************************************ 00:23:09.314 13:21:00 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.314 13:21:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:09.314 13:21:00 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:09.314 13:21:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:09.314 13:21:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.314 13:21:00 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:09.314 ************************************ 00:23:09.314 START TEST ftl_restore 00:23:09.314 ************************************ 00:23:09.314 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:09.314 * Looking for test storage... 00:23:09.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.574 13:21:00 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:09.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.574 --rc genhtml_branch_coverage=1 00:23:09.574 --rc genhtml_function_coverage=1 00:23:09.574 --rc genhtml_legend=1 00:23:09.574 --rc geninfo_all_blocks=1 00:23:09.574 --rc geninfo_unexecuted_blocks=1 00:23:09.574 00:23:09.574 ' 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:09.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.574 --rc genhtml_branch_coverage=1 00:23:09.574 --rc genhtml_function_coverage=1 00:23:09.574 --rc genhtml_legend=1 00:23:09.574 --rc geninfo_all_blocks=1 00:23:09.574 --rc geninfo_unexecuted_blocks=1 00:23:09.574 00:23:09.574 ' 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:09.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.574 --rc genhtml_branch_coverage=1 00:23:09.574 --rc genhtml_function_coverage=1 00:23:09.574 --rc genhtml_legend=1 00:23:09.574 --rc geninfo_all_blocks=1 00:23:09.574 --rc geninfo_unexecuted_blocks=1 00:23:09.574 00:23:09.574 ' 00:23:09.574 13:21:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:09.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.574 --rc genhtml_branch_coverage=1 00:23:09.574 --rc genhtml_function_coverage=1 00:23:09.574 --rc genhtml_legend=1 00:23:09.574 --rc geninfo_all_blocks=1 00:23:09.574 --rc geninfo_unexecuted_blocks=1 00:23:09.574 00:23:09.574 ' 00:23:09.574 13:21:00 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:09.574 13:21:00 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:09.574 13:21:00 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
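The xtrace above is scripts/common.sh deciding which lcov option set to use: `lt 1.15 2` splits both version strings on `.`, `-`, or `:` and compares them field by field as integers. A minimal sketch of that comparison, reconstructed from this trace — only the `<` branch is exercised in this run, so the handling of the other operators is an assumption:

# cmp_versions, sketched from the scripts/common.sh xtrace above.
# Splitting on IFS=.-: and the ternary loop bound are taken verbatim
# from the trace; the '>' and '=' branches are assumed.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
    done
    [[ $op == *'='* ]]                  # equal: true only for ==, <=, >=
}
# cmp_versions 1.15 '<' 2 succeeds (1 < 2), so the lcov 1.x LCOV_OPTS above are kept.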
00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.b1ZFzMQKWG 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:09.574 
13:21:01 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80476 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:09.574 13:21:01 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80476 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 80476 ']' 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.574 13:21:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:09.833 [2024-12-11 13:21:01.148712] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:23:09.833 [2024-12-11 13:21:01.148863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:23:09.833 [2024-12-11 13:21:01.337669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.092 [2024-12-11 13:21:01.482046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.030 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.030 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:11.030 13:21:02 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:11.288 13:21:02 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:11.288 13:21:02 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:11.288 13:21:02 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:11.288 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:11.288 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:11.288 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:11.288 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:11.288 13:21:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:11.548 { 00:23:11.548 "name": "nvme0n1", 00:23:11.548 "aliases": [ 00:23:11.548 "1082af6a-612d-409f-90c1-12edc38332d5" 00:23:11.548 ], 00:23:11.548 "product_name": "NVMe disk", 00:23:11.548 "block_size": 4096, 00:23:11.548 "num_blocks": 1310720, 00:23:11.548 "uuid": 
"1082af6a-612d-409f-90c1-12edc38332d5", 00:23:11.548 "numa_id": -1, 00:23:11.548 "assigned_rate_limits": { 00:23:11.548 "rw_ios_per_sec": 0, 00:23:11.548 "rw_mbytes_per_sec": 0, 00:23:11.548 "r_mbytes_per_sec": 0, 00:23:11.548 "w_mbytes_per_sec": 0 00:23:11.548 }, 00:23:11.548 "claimed": true, 00:23:11.548 "claim_type": "read_many_write_one", 00:23:11.548 "zoned": false, 00:23:11.548 "supported_io_types": { 00:23:11.548 "read": true, 00:23:11.548 "write": true, 00:23:11.548 "unmap": true, 00:23:11.548 "flush": true, 00:23:11.548 "reset": true, 00:23:11.548 "nvme_admin": true, 00:23:11.548 "nvme_io": true, 00:23:11.548 "nvme_io_md": false, 00:23:11.548 "write_zeroes": true, 00:23:11.548 "zcopy": false, 00:23:11.548 "get_zone_info": false, 00:23:11.548 "zone_management": false, 00:23:11.548 "zone_append": false, 00:23:11.548 "compare": true, 00:23:11.548 "compare_and_write": false, 00:23:11.548 "abort": true, 00:23:11.548 "seek_hole": false, 00:23:11.548 "seek_data": false, 00:23:11.548 "copy": true, 00:23:11.548 "nvme_iov_md": false 00:23:11.548 }, 00:23:11.548 "driver_specific": { 00:23:11.548 "nvme": [ 00:23:11.548 { 00:23:11.548 "pci_address": "0000:00:11.0", 00:23:11.548 "trid": { 00:23:11.548 "trtype": "PCIe", 00:23:11.548 "traddr": "0000:00:11.0" 00:23:11.548 }, 00:23:11.548 "ctrlr_data": { 00:23:11.548 "cntlid": 0, 00:23:11.548 "vendor_id": "0x1b36", 00:23:11.548 "model_number": "QEMU NVMe Ctrl", 00:23:11.548 "serial_number": "12341", 00:23:11.548 "firmware_revision": "8.0.0", 00:23:11.548 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:11.548 "oacs": { 00:23:11.548 "security": 0, 00:23:11.548 "format": 1, 00:23:11.548 "firmware": 0, 00:23:11.548 "ns_manage": 1 00:23:11.548 }, 00:23:11.548 "multi_ctrlr": false, 00:23:11.548 "ana_reporting": false 00:23:11.548 }, 00:23:11.548 "vs": { 00:23:11.548 "nvme_version": "1.4" 00:23:11.548 }, 00:23:11.548 "ns_data": { 00:23:11.548 "id": 1, 00:23:11.548 "can_share": false 00:23:11.548 } 00:23:11.548 } 00:23:11.548 ], 00:23:11.548 "mp_policy": "active_passive" 00:23:11.548 } 00:23:11.548 } 00:23:11.548 ]' 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:11.548 13:21:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:11.548 13:21:03 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:11.548 13:21:03 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:11.548 13:21:03 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:11.548 13:21:03 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:11.548 13:21:03 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:11.847 13:21:03 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=27bc1867-2df9-49a4-880a-2936b911cb4c 00:23:11.847 13:21:03 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:11.847 13:21:03 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27bc1867-2df9-49a4-880a-2936b911cb4c 00:23:12.132 13:21:03 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:12.391 13:21:03 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e36bdb91-bc6d-4601-8fb1-84d880e5ef7a 00:23:12.391 13:21:03 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e36bdb91-bc6d-4601-8fb1-84d880e5ef7a 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:12.650 13:21:04 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.650 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.650 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:12.650 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:12.650 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:12.650 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:12.909 { 00:23:12.909 "name": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:12.909 "aliases": [ 00:23:12.909 "lvs/nvme0n1p0" 00:23:12.909 ], 00:23:12.909 "product_name": "Logical Volume", 00:23:12.909 "block_size": 4096, 00:23:12.909 "num_blocks": 26476544, 00:23:12.909 "uuid": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:12.909 "assigned_rate_limits": { 00:23:12.909 "rw_ios_per_sec": 0, 00:23:12.909 "rw_mbytes_per_sec": 0, 00:23:12.909 "r_mbytes_per_sec": 0, 00:23:12.909 "w_mbytes_per_sec": 0 00:23:12.909 }, 00:23:12.909 "claimed": false, 00:23:12.909 "zoned": false, 00:23:12.909 "supported_io_types": { 00:23:12.909 "read": true, 00:23:12.909 "write": true, 00:23:12.909 "unmap": true, 00:23:12.909 "flush": false, 00:23:12.909 "reset": true, 00:23:12.909 "nvme_admin": false, 00:23:12.909 "nvme_io": false, 00:23:12.909 "nvme_io_md": false, 00:23:12.909 "write_zeroes": true, 00:23:12.909 "zcopy": false, 00:23:12.909 "get_zone_info": false, 00:23:12.909 "zone_management": false, 00:23:12.909 "zone_append": false, 00:23:12.909 "compare": false, 00:23:12.909 "compare_and_write": false, 00:23:12.909 "abort": false, 00:23:12.909 "seek_hole": true, 00:23:12.909 "seek_data": true, 00:23:12.909 "copy": false, 00:23:12.909 "nvme_iov_md": false 00:23:12.909 }, 00:23:12.909 "driver_specific": { 00:23:12.909 "lvol": { 00:23:12.909 "lvol_store_uuid": "e36bdb91-bc6d-4601-8fb1-84d880e5ef7a", 00:23:12.909 "base_bdev": "nvme0n1", 00:23:12.909 "thin_provision": true, 00:23:12.909 "num_allocated_clusters": 0, 00:23:12.909 "snapshot": false, 00:23:12.909 "clone": false, 00:23:12.909 "esnap_clone": false 00:23:12.909 } 00:23:12.909 } 00:23:12.909 } 00:23:12.909 ]' 00:23:12.909 13:21:04 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:12.909 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:12.909 13:21:04 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:12.909 13:21:04 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:12.909 13:21:04 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:13.168 13:21:04 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:13.168 13:21:04 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:13.168 13:21:04 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.168 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.168 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:13.168 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:13.168 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:13.168 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.427 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:13.427 { 00:23:13.427 "name": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:13.427 "aliases": [ 00:23:13.427 "lvs/nvme0n1p0" 00:23:13.427 ], 00:23:13.427 "product_name": "Logical Volume", 00:23:13.427 "block_size": 4096, 00:23:13.427 "num_blocks": 26476544, 00:23:13.427 "uuid": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:13.427 "assigned_rate_limits": { 00:23:13.427 "rw_ios_per_sec": 0, 00:23:13.427 "rw_mbytes_per_sec": 0, 00:23:13.427 "r_mbytes_per_sec": 0, 00:23:13.427 "w_mbytes_per_sec": 0 00:23:13.427 }, 00:23:13.427 "claimed": false, 00:23:13.427 "zoned": false, 00:23:13.427 "supported_io_types": { 00:23:13.427 "read": true, 00:23:13.427 "write": true, 00:23:13.427 "unmap": true, 00:23:13.427 "flush": false, 00:23:13.427 "reset": true, 00:23:13.427 "nvme_admin": false, 00:23:13.427 "nvme_io": false, 00:23:13.427 "nvme_io_md": false, 00:23:13.427 "write_zeroes": true, 00:23:13.427 "zcopy": false, 00:23:13.427 "get_zone_info": false, 00:23:13.427 "zone_management": false, 00:23:13.427 "zone_append": false, 00:23:13.427 "compare": false, 00:23:13.427 "compare_and_write": false, 00:23:13.427 "abort": false, 00:23:13.427 "seek_hole": true, 00:23:13.427 "seek_data": true, 00:23:13.427 "copy": false, 00:23:13.427 "nvme_iov_md": false 00:23:13.427 }, 00:23:13.427 "driver_specific": { 00:23:13.427 "lvol": { 00:23:13.427 "lvol_store_uuid": "e36bdb91-bc6d-4601-8fb1-84d880e5ef7a", 00:23:13.427 "base_bdev": "nvme0n1", 00:23:13.427 "thin_provision": true, 00:23:13.427 "num_allocated_clusters": 0, 00:23:13.428 "snapshot": false, 00:23:13.428 "clone": false, 00:23:13.428 "esnap_clone": false 00:23:13.428 } 00:23:13.428 } 00:23:13.428 } 00:23:13.428 ]' 00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
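The get_bdev_size helper traced above (and again below) turns the bdev_get_bdevs JSON into a size in MiB. A condensed sketch with the numbers from this run — the variable names and jq filters come from the trace, the exact body of the real helper in test/common/autotest_common.sh may differ:

# Sketch of the traced get_bdev_size helper.
get_bdev_size() {
    local bdev_info bs nb
    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
    echo $(( nb * bs / 1024 / 1024 ))             # bytes -> MiB
}
# nvme0n1:        1310720 * 4096 B =   5120 MiB, hence base_size=5120 above
# lvs/nvme0n1p0: 26476544 * 4096 B = 103424 MiB, the thin-provisioned lvol
# (thin provisioning is why a 103424 MiB lvol fits on a 5120 MiB namespace)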
00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:13.428 13:21:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:13.428 13:21:04 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:13.428 13:21:04 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:13.688 13:21:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:13.688 13:21:05 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.688 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.688 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:13.688 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:13.688 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:13.688 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0494db60-fc1e-4c05-ac26-4d58275772f1 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:13.946 { 00:23:13.946 "name": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:13.946 "aliases": [ 00:23:13.946 "lvs/nvme0n1p0" 00:23:13.946 ], 00:23:13.946 "product_name": "Logical Volume", 00:23:13.946 "block_size": 4096, 00:23:13.946 "num_blocks": 26476544, 00:23:13.946 "uuid": "0494db60-fc1e-4c05-ac26-4d58275772f1", 00:23:13.946 "assigned_rate_limits": { 00:23:13.946 "rw_ios_per_sec": 0, 00:23:13.946 "rw_mbytes_per_sec": 0, 00:23:13.946 "r_mbytes_per_sec": 0, 00:23:13.946 "w_mbytes_per_sec": 0 00:23:13.946 }, 00:23:13.946 "claimed": false, 00:23:13.946 "zoned": false, 00:23:13.946 "supported_io_types": { 00:23:13.946 "read": true, 00:23:13.946 "write": true, 00:23:13.946 "unmap": true, 00:23:13.946 "flush": false, 00:23:13.946 "reset": true, 00:23:13.946 "nvme_admin": false, 00:23:13.946 "nvme_io": false, 00:23:13.946 "nvme_io_md": false, 00:23:13.946 "write_zeroes": true, 00:23:13.946 "zcopy": false, 00:23:13.946 "get_zone_info": false, 00:23:13.946 "zone_management": false, 00:23:13.946 "zone_append": false, 00:23:13.946 "compare": false, 00:23:13.946 "compare_and_write": false, 00:23:13.946 "abort": false, 00:23:13.946 "seek_hole": true, 00:23:13.946 "seek_data": true, 00:23:13.946 "copy": false, 00:23:13.946 "nvme_iov_md": false 00:23:13.946 }, 00:23:13.946 "driver_specific": { 00:23:13.946 "lvol": { 00:23:13.946 "lvol_store_uuid": "e36bdb91-bc6d-4601-8fb1-84d880e5ef7a", 00:23:13.946 "base_bdev": "nvme0n1", 00:23:13.946 "thin_provision": true, 00:23:13.946 "num_allocated_clusters": 0, 00:23:13.946 "snapshot": false, 00:23:13.946 "clone": false, 00:23:13.946 "esnap_clone": false 00:23:13.946 } 00:23:13.946 } 00:23:13.946 } 00:23:13.946 ]' 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:13.946 13:21:05 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:13.946 13:21:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0494db60-fc1e-4c05-ac26-4d58275772f1 --l2p_dram_limit 10' 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:13.946 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:13.946 13:21:05 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0494db60-fc1e-4c05-ac26-4d58275772f1 --l2p_dram_limit 10 -c nvc0n1p0 00:23:14.206 [2024-12-11 13:21:05.582493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.582559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.206 [2024-12-11 13:21:05.582583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:14.206 [2024-12-11 13:21:05.582595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.582676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.582689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.206 [2024-12-11 13:21:05.582703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:14.206 [2024-12-11 13:21:05.582713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.582747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.206 [2024-12-11 13:21:05.583911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.206 [2024-12-11 13:21:05.583951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.583964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.206 [2024-12-11 13:21:05.583979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:23:14.206 [2024-12-11 13:21:05.583990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.584095] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:23:14.206 [2024-12-11 13:21:05.586433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.586474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:14.206 [2024-12-11 13:21:05.586488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:14.206 [2024-12-11 13:21:05.586502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.599901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 
13:21:05.599948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.206 [2024-12-11 13:21:05.599963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.343 ms 00:23:14.206 [2024-12-11 13:21:05.599978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.600102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.600138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.206 [2024-12-11 13:21:05.600151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:14.206 [2024-12-11 13:21:05.600170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.600250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.600268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.206 [2024-12-11 13:21:05.600279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:14.206 [2024-12-11 13:21:05.600298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.600327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.206 [2024-12-11 13:21:05.606098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.606137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.206 [2024-12-11 13:21:05.606155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.785 ms 00:23:14.206 [2024-12-11 13:21:05.606166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.606214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.606226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.206 [2024-12-11 13:21:05.606240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:14.206 [2024-12-11 13:21:05.606251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.606294] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:14.206 [2024-12-11 13:21:05.606443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.206 [2024-12-11 13:21:05.606466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.206 [2024-12-11 13:21:05.606481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:14.206 [2024-12-11 13:21:05.606498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.206 [2024-12-11 13:21:05.606511] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.206 [2024-12-11 13:21:05.606526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:14.206 [2024-12-11 13:21:05.606537] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.206 [2024-12-11 13:21:05.606555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.206 [2024-12-11 13:21:05.606566] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.206 [2024-12-11 13:21:05.606580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.606603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.206 [2024-12-11 13:21:05.606626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:14.206 [2024-12-11 13:21:05.606636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.606718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.206 [2024-12-11 13:21:05.606729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.206 [2024-12-11 13:21:05.606742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:14.206 [2024-12-11 13:21:05.606752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.206 [2024-12-11 13:21:05.606851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.206 [2024-12-11 13:21:05.606863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.206 [2024-12-11 13:21:05.606877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.206 [2024-12-11 13:21:05.606887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.206 [2024-12-11 13:21:05.606901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.206 [2024-12-11 13:21:05.606910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.206 [2024-12-11 13:21:05.606922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:14.206 [2024-12-11 13:21:05.606931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.207 [2024-12-11 13:21:05.606943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:14.207 [2024-12-11 13:21:05.606954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.207 [2024-12-11 13:21:05.606969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.207 [2024-12-11 13:21:05.606979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:14.207 [2024-12-11 13:21:05.606991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.207 [2024-12-11 13:21:05.607000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.207 [2024-12-11 13:21:05.607013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:14.207 [2024-12-11 13:21:05.607023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.207 [2024-12-11 13:21:05.607048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.207 [2024-12-11 13:21:05.607083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.207 
[2024-12-11 13:21:05.607125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.207 [2024-12-11 13:21:05.607160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.207 [2024-12-11 13:21:05.607191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.207 [2024-12-11 13:21:05.607228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.207 [2024-12-11 13:21:05.607249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.207 [2024-12-11 13:21:05.607258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:14.207 [2024-12-11 13:21:05.607272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.207 [2024-12-11 13:21:05.607281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.207 [2024-12-11 13:21:05.607293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:14.207 [2024-12-11 13:21:05.607302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.207 [2024-12-11 13:21:05.607324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:14.207 [2024-12-11 13:21:05.607336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.207 [2024-12-11 13:21:05.607359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.207 [2024-12-11 13:21:05.607368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.207 [2024-12-11 13:21:05.607392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:14.207 [2024-12-11 13:21:05.607408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.207 [2024-12-11 13:21:05.607417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.207 [2024-12-11 13:21:05.607430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.207 [2024-12-11 13:21:05.607439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.207 [2024-12-11 13:21:05.607452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.207 [2024-12-11 13:21:05.607463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.207 [2024-12-11 
13:21:05.607479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:14.207 [2024-12-11 13:21:05.607508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:14.207 [2024-12-11 13:21:05.607519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:14.207 [2024-12-11 13:21:05.607534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:14.207 [2024-12-11 13:21:05.607544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:14.207 [2024-12-11 13:21:05.607558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:14.207 [2024-12-11 13:21:05.607568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:14.207 [2024-12-11 13:21:05.607583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:14.207 [2024-12-11 13:21:05.607593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:14.207 [2024-12-11 13:21:05.607610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:14.207 [2024-12-11 13:21:05.607667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.207 [2024-12-11 13:21:05.607681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.207 [2024-12-11 13:21:05.607707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.207 [2024-12-11 13:21:05.607717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.207 [2024-12-11 13:21:05.607731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.207 [2024-12-11 13:21:05.607741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.207 [2024-12-11 13:21:05.607755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.207 [2024-12-11 13:21:05.607765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:23:14.207 [2024-12-11 13:21:05.607778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.207 [2024-12-11 13:21:05.607828] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:14.207 [2024-12-11 13:21:05.607847] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:17.498 [2024-12-11 13:21:08.895556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-11 13:21:08.895667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:17.498 [2024-12-11 13:21:08.895687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3293.064 ms 00:23:17.498 [2024-12-11 13:21:08.895703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-11 13:21:08.943455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-11 13:21:08.943549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:17.498 [2024-12-11 13:21:08.943568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.433 ms 00:23:17.498 [2024-12-11 13:21:08.943583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-11 13:21:08.943776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-11 13:21:08.943793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:17.498 [2024-12-11 13:21:08.943805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:17.498 [2024-12-11 13:21:08.943828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 [2024-12-11 13:21:08.996840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:08.996925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:17.499 [2024-12-11 13:21:08.996941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.026 ms 00:23:17.499 [2024-12-11 13:21:08.996956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 [2024-12-11 13:21:08.997014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:08.997036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:17.499 [2024-12-11 13:21:08.997048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:17.499 [2024-12-11 13:21:08.997074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 [2024-12-11 13:21:08.997916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:08.997944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:17.499 [2024-12-11 13:21:08.997956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:23:17.499 [2024-12-11 13:21:08.997971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 
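A quick consistency check on the superblock region table dumped above: the blk_offs/blk_sz values are in FTL blocks, and they line up exactly with the MiB figures in the human-readable layout dump if one assumes a 4 KiB block size (an assumption — the block size itself is not printed in this log). A minimal bash sketch:

    # Superblock blk_sz values (in blocks) vs. the MiB figures in the layout
    # dump above, assuming a 4 KiB FTL block (not printed in this log).
    echo "$(( 0x5000    * 4096 / 1048576 )) MiB"   # l2p, type:0x2        -> 80 MiB
    echo "$(( 0x800     * 4096 / 1048576 )) MiB"   # p2l0..p2l3, 0xa-0xd  -> 8 MiB each
    echo "$(( 0x1900000 * 4096 / 1048576 )) MiB"   # data_btm, type:0x9   -> 102400 MiB
    echo "$(( 0x20 * 4096 )) bytes"                # blk_offs 0x20 -> 131072 B = 0.125 MiB, shown as "0.12 MiB"

The offsets compose the same way: band_md at "80.12 MiB" is sb (0x20 blocks) plus l2p (0x5000 blocks) = 0x5020 blocks, matching blk_offs:0x5020 for region type:0x3 in the table.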
[2024-12-11 13:21:08.998089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:08.998104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:17.499 [2024-12-11 13:21:08.998135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:17.499 [2024-12-11 13:21:08.998153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 [2024-12-11 13:21:09.023603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:09.023657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:17.499 [2024-12-11 13:21:09.023674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.466 ms 00:23:17.499 [2024-12-11 13:21:09.023688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.499 [2024-12-11 13:21:09.048215] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:17.499 [2024-12-11 13:21:09.053515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.499 [2024-12-11 13:21:09.053561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:17.499 [2024-12-11 13:21:09.053581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.742 ms 00:23:17.499 [2024-12-11 13:21:09.053592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.758 [2024-12-11 13:21:09.146984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.758 [2024-12-11 13:21:09.147071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:17.758 [2024-12-11 13:21:09.147094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.479 ms 00:23:17.758 [2024-12-11 13:21:09.147122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.758 [2024-12-11 13:21:09.147356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.758 [2024-12-11 13:21:09.147378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:17.758 [2024-12-11 13:21:09.147398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:23:17.758 [2024-12-11 13:21:09.147409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.758 [2024-12-11 13:21:09.184591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.758 [2024-12-11 13:21:09.184647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:17.758 [2024-12-11 13:21:09.184685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.177 ms 00:23:17.758 [2024-12-11 13:21:09.184696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.758 [2024-12-11 13:21:09.221939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.758 [2024-12-11 13:21:09.221991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:17.758 [2024-12-11 13:21:09.222013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.241 ms 00:23:17.759 [2024-12-11 13:21:09.222024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.759 [2024-12-11 13:21:09.222782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.759 [2024-12-11 13:21:09.222807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:17.759 
[2024-12-11 13:21:09.222824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:23:17.759 [2024-12-11 13:21:09.222839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.326750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.326830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:18.018 [2024-12-11 13:21:09.326860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.003 ms 00:23:18.018 [2024-12-11 13:21:09.326872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.368300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.368368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:18.018 [2024-12-11 13:21:09.368406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.354 ms 00:23:18.018 [2024-12-11 13:21:09.368418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.407763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.407823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:18.018 [2024-12-11 13:21:09.407843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.348 ms 00:23:18.018 [2024-12-11 13:21:09.407870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.444579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.444629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:18.018 [2024-12-11 13:21:09.444649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.712 ms 00:23:18.018 [2024-12-11 13:21:09.444676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.444734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.444747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.018 [2024-12-11 13:21:09.444767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:18.018 [2024-12-11 13:21:09.444777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.444904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.018 [2024-12-11 13:21:09.444922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:18.018 [2024-12-11 13:21:09.444937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:18.018 [2024-12-11 13:21:09.444947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.018 [2024-12-11 13:21:09.446372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3869.646 ms, result 0 00:23:18.018 { 00:23:18.018 "name": "ftl0", 00:23:18.018 "uuid": "1ac27823-e4d5-46f8-ad78-b95cbb6bd09a" 00:23:18.018 } 00:23:18.018 13:21:09 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:18.018 13:21:09 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:18.278 13:21:09 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:18.278 13:21:09 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:18.536 [2024-12-11 13:21:09.880695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.880779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:18.536 [2024-12-11 13:21:09.880797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.536 [2024-12-11 13:21:09.880829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.880858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:18.536 [2024-12-11 13:21:09.885599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.885637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:18.536 [2024-12-11 13:21:09.885655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:23:18.536 [2024-12-11 13:21:09.885666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.885961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.885981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:18.536 [2024-12-11 13:21:09.885997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:23:18.536 [2024-12-11 13:21:09.886007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.888560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.888594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:18.536 [2024-12-11 13:21:09.888608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:23:18.536 [2024-12-11 13:21:09.888634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.893695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.893731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:18.536 [2024-12-11 13:21:09.893752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.041 ms 00:23:18.536 [2024-12-11 13:21:09.893763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.931668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.931718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:18.536 [2024-12-11 13:21:09.931737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.875 ms 00:23:18.536 [2024-12-11 13:21:09.931764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.954373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.954418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:18.536 [2024-12-11 13:21:09.954453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.589 ms 00:23:18.536 [2024-12-11 13:21:09.954465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-12-11 13:21:09.954642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-12-11 13:21:09.954657] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:18.536 [2024-12-11 13:21:09.954672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:18.536 [2024-12-11 13:21:09.954683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-12-11 13:21:09.991242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.537 [2024-12-11 13:21:09.991306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:18.537 [2024-12-11 13:21:09.991326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.587 ms 00:23:18.537 [2024-12-11 13:21:09.991337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-12-11 13:21:10.028312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.537 [2024-12-11 13:21:10.028362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:18.537 [2024-12-11 13:21:10.028397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.967 ms 00:23:18.537 [2024-12-11 13:21:10.028408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-12-11 13:21:10.064769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.537 [2024-12-11 13:21:10.064817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:18.537 [2024-12-11 13:21:10.064836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.361 ms 00:23:18.537 [2024-12-11 13:21:10.064847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-12-11 13:21:10.102939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-12-11 13:21:10.102996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:18.797 [2024-12-11 13:21:10.103016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.030 ms 00:23:18.797 [2024-12-11 13:21:10.103027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-12-11 13:21:10.103082] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:18.797 [2024-12-11 13:21:10.103104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103245] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 
[2024-12-11 13:21:10.103773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.103993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:23:18.797 [2024-12-11 13:21:10.104110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:18.797 [2024-12-11 13:21:10.104251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:18.798 [2024-12-11 13:21:10.104669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:18.798 [2024-12-11 13:21:10.104683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:23:18.798 [2024-12-11 13:21:10.104695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:18.798 [2024-12-11 13:21:10.104711] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:18.798 [2024-12-11 13:21:10.104725] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:18.798 [2024-12-11 13:21:10.104740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:18.798 [2024-12-11 13:21:10.104749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:18.798 [2024-12-11 13:21:10.104763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:18.798 [2024-12-11 13:21:10.104773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:18.798 [2024-12-11 13:21:10.104786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:18.798 [2024-12-11 13:21:10.104794] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:23:18.798 [2024-12-11 13:21:10.104808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.798 [2024-12-11 13:21:10.104819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:18.798 [2024-12-11 13:21:10.104835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:23:18.798 [2024-12-11 13:21:10.104849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.126331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.798 [2024-12-11 13:21:10.126378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:18.798 [2024-12-11 13:21:10.126396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.424 ms 00:23:18.798 [2024-12-11 13:21:10.126408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.127039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.798 [2024-12-11 13:21:10.127060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:18.798 [2024-12-11 13:21:10.127080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:23:18.798 [2024-12-11 13:21:10.127091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.196742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.798 [2024-12-11 13:21:10.196837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.798 [2024-12-11 13:21:10.196858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.798 [2024-12-11 13:21:10.196870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.196973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.798 [2024-12-11 13:21:10.196986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.798 [2024-12-11 13:21:10.197004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.798 [2024-12-11 13:21:10.197015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.197165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.798 [2024-12-11 13:21:10.197181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.798 [2024-12-11 13:21:10.197196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.798 [2024-12-11 13:21:10.197206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.197238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.798 [2024-12-11 13:21:10.197249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.798 [2024-12-11 13:21:10.197262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.798 [2024-12-11 13:21:10.197276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.798 [2024-12-11 13:21:10.333206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.798 [2024-12-11 13:21:10.333305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.798 [2024-12-11 13:21:10.333325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:18.798 [2024-12-11 13:21:10.333338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.439612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.439696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.058 [2024-12-11 13:21:10.439717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.439733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.439887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.439900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.058 [2024-12-11 13:21:10.439915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.439926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.439999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.440011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.058 [2024-12-11 13:21:10.440025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.440037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.440188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.440203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.058 [2024-12-11 13:21:10.440217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.440228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.440281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.440295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:19.058 [2024-12-11 13:21:10.440309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.440319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.440372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.440383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.058 [2024-12-11 13:21:10.440398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.440408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.440466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.058 [2024-12-11 13:21:10.440478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.058 [2024-12-11 13:21:10.440492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.058 [2024-12-11 13:21:10.440502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.058 [2024-12-11 13:21:10.440667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.834 ms, result 0 00:23:19.058 true 00:23:19.058 13:21:10 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80476 
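Each FTL management process closes with a "Management process finished" summary line carrying its total duration; in this run 'FTL startup' took 3869.646 ms (dominated by the 3293 ms 'Scrub NV cache' step) and 'FTL shutdown' took 560.834 ms. A hedged one-liner for pulling those summaries out of a saved copy of a build log (the filename here is a placeholder):

    # Extract per-process duration summaries from a saved copy of this log.
    grep -oE "name '[^']+', duration = [0-9.]+ ms, result [0-9]+" build.log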
00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80476 ']' 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80476 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80476 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.058 killing process with pid 80476 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80476' 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 80476 00:23:19.058 13:21:10 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 80476 00:23:24.327 13:21:15 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:28.515 262144+0 records in 00:23:28.515 262144+0 records out 00:23:28.515 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.3184 s, 249 MB/s 00:23:28.515 13:21:19 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:30.462 13:21:21 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:30.462 [2024-12-11 13:21:21.767214] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:23:30.462 [2024-12-11 13:21:21.767360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80725 ] 00:23:30.462 [2024-12-11 13:21:21.961064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.721 [2024-12-11 13:21:22.110812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.291 [2024-12-11 13:21:22.547493] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.291 [2024-12-11 13:21:22.547574] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.291 [2024-12-11 13:21:22.720824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.720888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:31.291 [2024-12-11 13:21:22.720923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:31.291 [2024-12-11 13:21:22.720934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.720990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.721007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.291 [2024-12-11 13:21:22.721018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:31.291 [2024-12-11 13:21:22.721029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.721051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:23:31.291 [2024-12-11 13:21:22.721979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:31.291 [2024-12-11 13:21:22.722008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.722020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.291 [2024-12-11 13:21:22.722031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:23:31.291 [2024-12-11 13:21:22.722042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.724526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:31.291 [2024-12-11 13:21:22.745547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.745608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:31.291 [2024-12-11 13:21:22.745625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.050 ms 00:23:31.291 [2024-12-11 13:21:22.745638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.745732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.745747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:31.291 [2024-12-11 13:21:22.745759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:31.291 [2024-12-11 13:21:22.745770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.758305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.758340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.291 [2024-12-11 13:21:22.758355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:23:31.291 [2024-12-11 13:21:22.758376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.758497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.758513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.291 [2024-12-11 13:21:22.758525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:31.291 [2024-12-11 13:21:22.758536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.758606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.758619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:31.291 [2024-12-11 13:21:22.758630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:31.291 [2024-12-11 13:21:22.758641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.758680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.291 [2024-12-11 13:21:22.764399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.764430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.291 [2024-12-11 13:21:22.764450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:23:31.291 [2024-12-11 13:21:22.764461] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.764502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.291 [2024-12-11 13:21:22.764515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:31.291 [2024-12-11 13:21:22.764526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:31.291 [2024-12-11 13:21:22.764537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.291 [2024-12-11 13:21:22.764580] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:31.291 [2024-12-11 13:21:22.764614] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:31.291 [2024-12-11 13:21:22.764654] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:31.291 [2024-12-11 13:21:22.764680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:31.291 [2024-12-11 13:21:22.764775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:31.291 [2024-12-11 13:21:22.764789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:31.292 [2024-12-11 13:21:22.764804] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:31.292 [2024-12-11 13:21:22.764819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:31.292 [2024-12-11 13:21:22.764832] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:31.292 [2024-12-11 13:21:22.764844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:31.292 [2024-12-11 13:21:22.764856] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:31.292 [2024-12-11 13:21:22.764866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:31.292 [2024-12-11 13:21:22.764884] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:31.292 [2024-12-11 13:21:22.764896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.292 [2024-12-11 13:21:22.764906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:31.292 [2024-12-11 13:21:22.764917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:31.292 [2024-12-11 13:21:22.764927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.292 [2024-12-11 13:21:22.765000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.292 [2024-12-11 13:21:22.765012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:31.292 [2024-12-11 13:21:22.765023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:31.292 [2024-12-11 13:21:22.765033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.292 [2024-12-11 13:21:22.765143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:31.292 [2024-12-11 13:21:22.765158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:31.292 [2024-12-11 13:21:22.765170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:23:31.292 [2024-12-11 13:21:22.765181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:31.292 [2024-12-11 13:21:22.765203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:31.292 [2024-12-11 13:21:22.765234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.292 [2024-12-11 13:21:22.765257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:31.292 [2024-12-11 13:21:22.765267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:31.292 [2024-12-11 13:21:22.765278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.292 [2024-12-11 13:21:22.765304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:31.292 [2024-12-11 13:21:22.765315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:31.292 [2024-12-11 13:21:22.765325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:31.292 [2024-12-11 13:21:22.765345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:31.292 [2024-12-11 13:21:22.765374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:31.292 [2024-12-11 13:21:22.765403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:31.292 [2024-12-11 13:21:22.765431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:31.292 [2024-12-11 13:21:22.765459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:31.292 [2024-12-11 13:21:22.765486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.292 [2024-12-11 13:21:22.765504] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:23:31.292 [2024-12-11 13:21:22.765513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:31.292 [2024-12-11 13:21:22.765522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.292 [2024-12-11 13:21:22.765531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:31.292 [2024-12-11 13:21:22.765540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:31.292 [2024-12-11 13:21:22.765557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:31.292 [2024-12-11 13:21:22.765587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:31.292 [2024-12-11 13:21:22.765598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:31.292 [2024-12-11 13:21:22.765619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:31.292 [2024-12-11 13:21:22.765629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.292 [2024-12-11 13:21:22.765650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:31.292 [2024-12-11 13:21:22.765660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:31.292 [2024-12-11 13:21:22.765670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:31.292 [2024-12-11 13:21:22.765682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:31.292 [2024-12-11 13:21:22.765692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:31.292 [2024-12-11 13:21:22.765701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:31.292 [2024-12-11 13:21:22.765713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:31.292 [2024-12-11 13:21:22.765725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.292 [2024-12-11 13:21:22.765745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:31.292 [2024-12-11 13:21:22.765756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:31.292 [2024-12-11 13:21:22.765767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:31.292 [2024-12-11 13:21:22.765778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:31.292 [2024-12-11 13:21:22.765789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:31.292 [2024-12-11 13:21:22.765800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:31.292 [2024-12-11 13:21:22.765812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:23:31.292 [2024-12-11 13:21:22.765823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:23:31.292 [2024-12-11 13:21:22.765833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:23:31.292 [2024-12-11 13:21:22.765844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:23:31.292 [2024-12-11 13:21:22.765897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:31.292 [2024-12-11 13:21:22.765909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:31.292 [2024-12-11 13:21:22.765932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:31.292 [2024-12-11 13:21:22.765944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:31.292 [2024-12-11 13:21:22.765955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:31.292 [2024-12-11 13:21:22.765966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.292 [2024-12-11 13:21:22.765978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:23:31.292 [2024-12-11 13:21:22.765989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms
00:23:31.292 [2024-12-11 13:21:22.765999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.292 [2024-12-11 13:21:22.816408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.292 [2024-12-11 13:21:22.816462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:31.292 [2024-12-11 13:21:22.816481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.429 ms
00:23:31.292 [2024-12-11 13:21:22.816501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
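The blk_offs/blk_sz pairs in the superblock dump above are counts of FTL blocks, and they line up with the MiB figures in the region dump before it. A minimal sketch of that conversion, assuming the 4 KiB FTL block size SPDK uses for these metadata regions, and pairing type numbers with region names only by matching offsets (the log itself does not name the types):

    # Convert the superblock's blk_offs/blk_sz block counts to MiB.
    # Assumes 4 KiB FTL blocks; the type-to-name pairing in the comments is
    # inferred by matching against the "Region ... offset/blocks" lines above.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed)

    def blocks_to_mib(nblocks: int) -> float:
        return nblocks * FTL_BLOCK_SIZE / (1024 ** 2)

    print(blocks_to_mib(0x7160))  # 113.375 -> "offset: 113.38 MiB" (trim_md_mirror?)
    print(blocks_to_mib(0x40))    # 0.25    -> its "blocks: 0.25 MiB"
    print(blocks_to_mib(0x71a0))  # 113.625 -> "offset: 113.62 MiB" (trim_log?)
    print(blocks_to_mib(0x20))    # 0.125   -> its "blocks: 0.12 MiB"

The same arithmetic reproduces every offset in the dump, e.g. type 0x2 at blk_offs:0x20 with blk_sz:0x5000 is the 80.00 MiB region starting at 0.12 MiB.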
00:23:31.292 [2024-12-11 13:21:22.816612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.292 [2024-12-11 13:21:22.816625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:23:31.292 [2024-12-11 13:21:22.816636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:23:31.292 [2024-12-11 13:21:22.816648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.879383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.879440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:31.553 [2024-12-11 13:21:22.879457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.718 ms
00:23:31.553 [2024-12-11 13:21:22.879468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.879541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.879558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:31.553 [2024-12-11 13:21:22.879570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:31.553 [2024-12-11 13:21:22.879581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.880426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.880446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:31.553 [2024-12-11 13:21:22.880459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms
00:23:31.553 [2024-12-11 13:21:22.880470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.880615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.880629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:31.553 [2024-12-11 13:21:22.880645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms
00:23:31.553 [2024-12-11 13:21:22.880656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.904163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.904216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:31.553 [2024-12-11 13:21:22.904233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.520 ms
00:23:31.553 [2024-12-11 13:21:22.904244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.924653] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:23:31.553 [2024-12-11 13:21:22.924695] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:31.553 [2024-12-11 13:21:22.924712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.924725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:23:31.553 [2024-12-11 13:21:22.924738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.340 ms
00:23:31.553 [2024-12-11 13:21:22.924748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:22.954535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:22.954605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:23:31.553 [2024-12-11 13:21:22.954621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.785 ms
00:23:31.553 [2024-12-11 13:21:22.954633] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:22.973316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:22.973354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:31.553 [2024-12-11 13:21:22.973369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.663 ms 00:23:31.553 [2024-12-11 13:21:22.973380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:22.991044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:22.991078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:31.553 [2024-12-11 13:21:22.991092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.644 ms 00:23:31.553 [2024-12-11 13:21:22.991102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:22.991947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:22.991973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:31.553 [2024-12-11 13:21:22.991985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:23:31.553 [2024-12-11 13:21:22.992001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:23.089504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:23.089612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:31.553 [2024-12-11 13:21:23.089633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.636 ms 00:23:31.553 [2024-12-11 13:21:23.089658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:23.102155] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:31.553 [2024-12-11 13:21:23.107410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:23.107448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:31.553 [2024-12-11 13:21:23.107482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:23:31.553 [2024-12-11 13:21:23.107494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:23.107667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:23.107684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:31.553 [2024-12-11 13:21:23.107696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:31.553 [2024-12-11 13:21:23.107708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:23.107807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:23.107821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:31.553 [2024-12-11 13:21:23.107834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:31.553 [2024-12-11 13:21:23.107845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.553 [2024-12-11 13:21:23.107873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.553 [2024-12-11 13:21:23.107885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:23:31.553 [2024-12-11 13:21:23.107895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:23:31.553 [2024-12-11 13:21:23.107906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.553 [2024-12-11 13:21:23.107954] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:31.553 [2024-12-11 13:21:23.107974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.553 [2024-12-11 13:21:23.107986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:23:31.553 [2024-12-11 13:21:23.107997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:23:31.553 [2024-12-11 13:21:23.108007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.813 [2024-12-11 13:21:23.146940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.813 [2024-12-11 13:21:23.147021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:31.813 [2024-12-11 13:21:23.147041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.971 ms
00:23:31.813 [2024-12-11 13:21:23.147066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.813 [2024-12-11 13:21:23.147183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:31.813 [2024-12-11 13:21:23.147198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:31.813 [2024-12-11 13:21:23.147212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:23:31.814 [2024-12-11 13:21:23.147223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:31.814 [2024-12-11 13:21:23.148915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.200 ms, result 0
00:23:32.753  [2024-12-11T13:21:25.266Z] Copying: 25/1024 [MB] (25 MBps) ... [2024-12-11T13:22:04.307Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-12-11 13:22:04.047981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.048056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:12.740 [2024-12-11 13:22:04.048075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:12.740 [2024-12-11 13:22:04.048087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:12.740 [2024-12-11 13:22:04.048125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:12.740 [2024-12-11 13:22:04.052845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.052892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:12.740 [2024-12-11 13:22:04.052923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms
00:24:12.740 [2024-12-11 13:22:04.052943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:12.740 [2024-12-11 13:22:04.055161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.055203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:12.740 [2024-12-11 13:22:04.055216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms
00:24:12.740 [2024-12-11 13:22:04.055227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:12.740 [2024-12-11 13:22:04.073394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.073438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:12.740 [2024-12-11 13:22:04.073469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.177 ms
00:24:12.740 [2024-12-11 13:22:04.073481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:12.740 [2024-12-11 13:22:04.078486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.078518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:12.740 [2024-12-11 13:22:04.078532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.970 ms
00:24:12.740 [2024-12-11 13:22:04.078542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:12.740 [2024-12-11 13:22:04.116098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.740 [2024-12-11 13:22:04.116151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:12.740 [2024-12-11 13:22:04.116168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 37.563 ms 00:24:12.740 [2024-12-11 13:22:04.116195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.137754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.137801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:12.740 [2024-12-11 13:22:04.137818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.550 ms 00:24:12.740 [2024-12-11 13:22:04.137830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.138009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.138027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:12.740 [2024-12-11 13:22:04.138040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:24:12.740 [2024-12-11 13:22:04.138050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.174608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.174658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:12.740 [2024-12-11 13:22:04.174690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.599 ms 00:24:12.740 [2024-12-11 13:22:04.174702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.210987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.211032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:12.740 [2024-12-11 13:22:04.211048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.299 ms 00:24:12.740 [2024-12-11 13:22:04.211059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.246423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.246471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:12.740 [2024-12-11 13:22:04.246502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.355 ms 00:24:12.740 [2024-12-11 13:22:04.246513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.282654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.740 [2024-12-11 13:22:04.282701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:12.740 [2024-12-11 13:22:04.282732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.087 ms 00:24:12.740 [2024-12-11 13:22:04.282743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.740 [2024-12-11 13:22:04.282784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:12.740 [2024-12-11 13:22:04.282804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:12.740 [2024-12-11 13:22:04.282863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.282998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:12.740 [2024-12-11 13:22:04.283406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283711] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:12.741 [2024-12-11 13:22:04.283958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:12.741 [2024-12-11 13:22:04.283975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:24:12.741 [2024-12-11 13:22:04.283987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:12.741 [2024-12-11 13:22:04.284000] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960
00:24:12.741 [2024-12-11 13:22:04.284010] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:12.741 [2024-12-11 13:22:04.284023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:12.741 [2024-12-11 13:22:04.284033] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:12.741 [2024-12-11 13:22:04.284057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:12.741 [2024-12-11 13:22:04.284068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:12.741 [2024-12-11 13:22:04.284077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:12.741 [2024-12-11 13:22:04.284086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:12.741 [2024-12-11 13:22:04.284097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:12.741 [2024-12-11 13:22:04.284107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:12.741 [2024-12-11 13:22:04.284128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms
00:24:12.741 [2024-12-11 13:22:04.284139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.305338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.000 [2024-12-11 13:22:04.305379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:13.000 [2024-12-11 13:22:04.305395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.189 ms
00:24:13.000 [2024-12-11 13:22:04.305406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.306001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.000 [2024-12-11 13:22:04.306023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:13.000 [2024-12-11 13:22:04.306036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms
00:24:13.000 [2024-12-11 13:22:04.306055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.361015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.000 [2024-12-11 13:22:04.361080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:13.000 [2024-12-11 13:22:04.361095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.000 [2024-12-11 13:22:04.361107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.361208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.000 [2024-12-11 13:22:04.361220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:13.000 [2024-12-11 13:22:04.361231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.000 [2024-12-11 13:22:04.361248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.361351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.000 [2024-12-11 13:22:04.361366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:13.000 [2024-12-11 13:22:04.361377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.000 [2024-12-11 13:22:04.361395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
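The statistics dump above shows where the "WAF: inf" figure comes from: 960 total writes against 0 user writes. A minimal sketch of that relationship, assuming the printed WAF is simply total device writes divided by user writes (the function name below is illustrative, not SPDK API):

    # Reproduces the "WAF: inf" arithmetic from the dump above,
    # assuming WAF = total writes / user writes.
    def waf(total_writes: int, user_writes: int) -> float:
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf -- matches "total writes: 960", "user writes: 0", "WAF: inf"

With no user I/O in this phase, every one of the 960 writes is FTL metadata, so an infinite write-amplification readout is expected rather than an error.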
00:24:13.000 [2024-12-11 13:22:04.361415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.000 [2024-12-11 13:22:04.361427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:13.000 [2024-12-11 13:22:04.361438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.000 [2024-12-11 13:22:04.361449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.000 [2024-12-11 13:22:04.496274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.000 [2024-12-11 13:22:04.496364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:13.000 [2024-12-11 13:22:04.496383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.000 [2024-12-11 13:22:04.496411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.604349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.604437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:13.260 [2024-12-11 13:22:04.604455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.604490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.604608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.604620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:13.260 [2024-12-11 13:22:04.604631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.604642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.604690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.604702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:13.260 [2024-12-11 13:22:04.604713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.604723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.605069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.605085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:13.260 [2024-12-11 13:22:04.605097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.605107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.605167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.605193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:13.260 [2024-12-11 13:22:04.605204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.605215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.605264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.605282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:13.260 [2024-12-11 13:22:04.605294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.605305] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.605357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.260 [2024-12-11 13:22:04.605370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:13.260 [2024-12-11 13:22:04.605381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.260 [2024-12-11 13:22:04.605391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.260 [2024-12-11 13:22:04.605547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.428 ms, result 0
00:24:14.639 
00:24:14.639 
00:24:14.639 13:22:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:24:14.639 [2024-12-11 13:22:05.994085] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
00:24:14.639 [2024-12-11 13:22:05.994240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81166 ]
00:24:14.639 [2024-12-11 13:22:06.185148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:14.898 [2024-12-11 13:22:06.329519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:15.468 [2024-12-11 13:22:06.753088] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:15.468 [2024-12-11 13:22:06.753183] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:15.468 [2024-12-11 13:22:06.918962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.468 [2024-12-11 13:22:06.919023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:15.468 [2024-12-11 13:22:06.919041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:15.468 [2024-12-11 13:22:06.919053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.468 [2024-12-11 13:22:06.919109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.468 [2024-12-11 13:22:06.919137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:15.468 [2024-12-11 13:22:06.919149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:24:15.468 [2024-12-11 13:22:06.919160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.468 [2024-12-11 13:22:06.919183] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:15.468 [2024-12-11 13:22:06.920198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:15.468 [2024-12-11 13:22:06.920221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.468 [2024-12-11 13:22:06.920233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:15.468 [2024-12-11 13:22:06.920245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms
00:24:15.469 [2024-12-11 13:22:06.920255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
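The spdk_dd invocation above reads from ftl0 into the test file, and its --count ties back to the copy loop earlier: if --count is a block count and the FTL device exposes 4 KiB blocks (assumed here, not taken from the ftl.json shown only by path), 262144 blocks is exactly the 1024 MB that was transferred. A quick check of that arithmetic:

    # Assumed: --count is in blocks and ftl0 uses 4 KiB logical blocks.
    count = 262144
    block_size = 4096
    print(count * block_size / (1024 ** 2))  # 1024.0 -> the "1024/1024 [MB]" total above

At the observed average of 25 MBps, moving those 1024 MB takes roughly 41 seconds, which matches the jump in the log's elapsed-time column from 00:23:32 to 00:24:12 across the copy.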
00:24:15.469 [2024-12-11 13:22:06.922682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:15.469 [2024-12-11 13:22:06.943467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.943505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:15.469 [2024-12-11 13:22:06.943520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.820 ms
00:24:15.469 [2024-12-11 13:22:06.943532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.943617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.943630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:15.469 [2024-12-11 13:22:06.943641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:24:15.469 [2024-12-11 13:22:06.943652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.956257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.956289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:15.469 [2024-12-11 13:22:06.956304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.548 ms
00:24:15.469 [2024-12-11 13:22:06.956321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.956418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.956433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:15.469 [2024-12-11 13:22:06.956445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:24:15.469 [2024-12-11 13:22:06.956456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.956521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.956534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:15.469 [2024-12-11 13:22:06.956545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:24:15.469 [2024-12-11 13:22:06.956556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.956587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:15.469 [2024-12-11 13:22:06.962483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.962514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:15.469 [2024-12-11 13:22:06.962532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.912 ms
00:24:15.469 [2024-12-11 13:22:06.962543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.962580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.469 [2024-12-11 13:22:06.962592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:15.469 [2024-12-11 13:22:06.962603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:24:15.469 [2024-12-11 13:22:06.962614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:15.469 [2024-12-11 13:22:06.962656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:15.469 [2024-12-11 13:22:06.962685] upgrade/ftl_sb_v5.c:
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:15.469 [2024-12-11 13:22:06.962726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:15.469 [2024-12-11 13:22:06.962749] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:15.469 [2024-12-11 13:22:06.962855] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:15.469 [2024-12-11 13:22:06.962869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:15.469 [2024-12-11 13:22:06.962884] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:15.469 [2024-12-11 13:22:06.962898] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:15.469 [2024-12-11 13:22:06.962910] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:15.469 [2024-12-11 13:22:06.962922] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:15.469 [2024-12-11 13:22:06.962933] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:15.469 [2024-12-11 13:22:06.962943] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:15.469 [2024-12-11 13:22:06.962958] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:15.469 [2024-12-11 13:22:06.962969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.469 [2024-12-11 13:22:06.962979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:15.469 [2024-12-11 13:22:06.962990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:24:15.469 [2024-12-11 13:22:06.963000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.469 [2024-12-11 13:22:06.963071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.469 [2024-12-11 13:22:06.963083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:15.469 [2024-12-11 13:22:06.963093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:15.469 [2024-12-11 13:22:06.963104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.469 [2024-12-11 13:22:06.963208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:15.469 [2024-12-11 13:22:06.963224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:15.469 [2024-12-11 13:22:06.963236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:15.469 [2024-12-11 13:22:06.963267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:15.469 [2024-12-11 13:22:06.963299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:15.469 [2024-12-11 
13:22:06.963308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.469 [2024-12-11 13:22:06.963317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:15.469 [2024-12-11 13:22:06.963330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:15.469 [2024-12-11 13:22:06.963340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.469 [2024-12-11 13:22:06.963362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:15.469 [2024-12-11 13:22:06.963373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:15.469 [2024-12-11 13:22:06.963382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:15.469 [2024-12-11 13:22:06.963401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:15.469 [2024-12-11 13:22:06.963429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:15.469 [2024-12-11 13:22:06.963457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:15.469 [2024-12-11 13:22:06.963484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:15.469 [2024-12-11 13:22:06.963512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:15.469 [2024-12-11 13:22:06.963538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.469 [2024-12-11 13:22:06.963556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:15.469 [2024-12-11 13:22:06.963565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:15.469 [2024-12-11 13:22:06.963574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.469 [2024-12-11 13:22:06.963583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:15.469 [2024-12-11 13:22:06.963592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:15.469 [2024-12-11 13:22:06.963600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:24:15.469 [2024-12-11 13:22:06.963618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:15.469 [2024-12-11 13:22:06.963627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963637] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:15.469 [2024-12-11 13:22:06.963648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:15.469 [2024-12-11 13:22:06.963657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.469 [2024-12-11 13:22:06.963667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.469 [2024-12-11 13:22:06.963678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:15.469 [2024-12-11 13:22:06.963688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:15.469 [2024-12-11 13:22:06.963697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:15.469 [2024-12-11 13:22:06.963706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:15.469 [2024-12-11 13:22:06.963715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:15.469 [2024-12-11 13:22:06.963724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:15.470 [2024-12-11 13:22:06.963734] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:15.470 [2024-12-11 13:22:06.963747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:15.470 [2024-12-11 13:22:06.963774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:15.470 [2024-12-11 13:22:06.963785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:15.470 [2024-12-11 13:22:06.963795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:15.470 [2024-12-11 13:22:06.963806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:15.470 [2024-12-11 13:22:06.963816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:15.470 [2024-12-11 13:22:06.963827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:15.470 [2024-12-11 13:22:06.963837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:15.470 [2024-12-11 13:22:06.963848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:15.470 [2024-12-11 13:22:06.963858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:15.470 [2024-12-11 13:22:06.963909] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:15.470 [2024-12-11 13:22:06.963920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:15.470 [2024-12-11 13:22:06.963942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:15.470 [2024-12-11 13:22:06.963952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:15.470 [2024-12-11 13:22:06.963962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:15.470 [2024-12-11 13:22:06.963973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.470 [2024-12-11 13:22:06.963984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:15.470 [2024-12-11 13:22:06.963994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:24:15.470 [2024-12-11 13:22:06.964005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.470 [2024-12-11 13:22:07.012841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.470 [2024-12-11 13:22:07.012896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.470 [2024-12-11 13:22:07.012914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.860 ms 00:24:15.470 [2024-12-11 13:22:07.012931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.470 [2024-12-11 13:22:07.013045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.470 [2024-12-11 13:22:07.013058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:15.470 [2024-12-11 13:22:07.013069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:15.470 [2024-12-11 13:22:07.013079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.075594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.075655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.730 [2024-12-11 13:22:07.075673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.475 ms 00:24:15.730 [2024-12-11 13:22:07.075685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.075753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 
13:22:07.075770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.730 [2024-12-11 13:22:07.075783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:15.730 [2024-12-11 13:22:07.075793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.076593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.076611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.730 [2024-12-11 13:22:07.076623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:24:15.730 [2024-12-11 13:22:07.076635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.076780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.076796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.730 [2024-12-11 13:22:07.076811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:15.730 [2024-12-11 13:22:07.076822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.099604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.099658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.730 [2024-12-11 13:22:07.099674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.793 ms 00:24:15.730 [2024-12-11 13:22:07.099684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.119188] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:15.730 [2024-12-11 13:22:07.119221] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:15.730 [2024-12-11 13:22:07.119237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.119249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:15.730 [2024-12-11 13:22:07.119261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:24:15.730 [2024-12-11 13:22:07.119272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.148214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.148269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:15.730 [2024-12-11 13:22:07.148300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.944 ms 00:24:15.730 [2024-12-11 13:22:07.148312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.166163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.166196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:15.730 [2024-12-11 13:22:07.166210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:24:15.730 [2024-12-11 13:22:07.166221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.183702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.183731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:24:15.730 [2024-12-11 13:22:07.183743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.468 ms 00:24:15.730 [2024-12-11 13:22:07.183769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.184550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.184573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:15.730 [2024-12-11 13:22:07.184590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:24:15.730 [2024-12-11 13:22:07.184600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.280141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.730 [2024-12-11 13:22:07.280216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:15.730 [2024-12-11 13:22:07.280244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.669 ms 00:24:15.730 [2024-12-11 13:22:07.280255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.730 [2024-12-11 13:22:07.292726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:15.990 [2024-12-11 13:22:07.297955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.297988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:15.990 [2024-12-11 13:22:07.298006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.642 ms 00:24:15.990 [2024-12-11 13:22:07.298018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.298190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.298205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:15.990 [2024-12-11 13:22:07.298218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:15.990 [2024-12-11 13:22:07.298234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.298328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.298343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:15.990 [2024-12-11 13:22:07.298355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:15.990 [2024-12-11 13:22:07.298366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.298396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.298408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:15.990 [2024-12-11 13:22:07.298419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:15.990 [2024-12-11 13:22:07.298430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.298478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:15.990 [2024-12-11 13:22:07.298492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.298503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:15.990 [2024-12-11 13:22:07.298514] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:15.990 [2024-12-11 13:22:07.298525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.337297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.337346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:15.990 [2024-12-11 13:22:07.337371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.811 ms 00:24:15.990 [2024-12-11 13:22:07.337382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.337475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.990 [2024-12-11 13:22:07.337488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:15.990 [2024-12-11 13:22:07.337499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:15.990 [2024-12-11 13:22:07.337510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.990 [2024-12-11 13:22:07.339008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.219 ms, result 0 00:24:17.368  [2024-12-11T13:22:09.872Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-11T13:22:10.808Z] Copying: 54/1024 [MB] (28 MBps) [2024-12-11T13:22:11.744Z] Copying: 82/1024 [MB] (28 MBps) [2024-12-11T13:22:12.679Z] Copying: 110/1024 [MB] (28 MBps) [2024-12-11T13:22:13.614Z] Copying: 137/1024 [MB] (27 MBps) [2024-12-11T13:22:14.990Z] Copying: 164/1024 [MB] (26 MBps) [2024-12-11T13:22:15.558Z] Copying: 190/1024 [MB] (26 MBps) [2024-12-11T13:22:16.934Z] Copying: 217/1024 [MB] (27 MBps) [2024-12-11T13:22:17.871Z] Copying: 244/1024 [MB] (26 MBps) [2024-12-11T13:22:18.809Z] Copying: 271/1024 [MB] (27 MBps) [2024-12-11T13:22:19.744Z] Copying: 299/1024 [MB] (27 MBps) [2024-12-11T13:22:20.682Z] Copying: 327/1024 [MB] (28 MBps) [2024-12-11T13:22:21.619Z] Copying: 356/1024 [MB] (28 MBps) [2024-12-11T13:22:22.559Z] Copying: 383/1024 [MB] (27 MBps) [2024-12-11T13:22:23.939Z] Copying: 409/1024 [MB] (26 MBps) [2024-12-11T13:22:24.877Z] Copying: 437/1024 [MB] (27 MBps) [2024-12-11T13:22:25.813Z] Copying: 463/1024 [MB] (26 MBps) [2024-12-11T13:22:26.805Z] Copying: 489/1024 [MB] (26 MBps) [2024-12-11T13:22:27.741Z] Copying: 516/1024 [MB] (26 MBps) [2024-12-11T13:22:28.678Z] Copying: 542/1024 [MB] (26 MBps) [2024-12-11T13:22:29.615Z] Copying: 569/1024 [MB] (26 MBps) [2024-12-11T13:22:30.552Z] Copying: 595/1024 [MB] (26 MBps) [2024-12-11T13:22:31.929Z] Copying: 622/1024 [MB] (26 MBps) [2024-12-11T13:22:32.866Z] Copying: 650/1024 [MB] (28 MBps) [2024-12-11T13:22:33.804Z] Copying: 678/1024 [MB] (27 MBps) [2024-12-11T13:22:34.740Z] Copying: 706/1024 [MB] (27 MBps) [2024-12-11T13:22:35.677Z] Copying: 732/1024 [MB] (26 MBps) [2024-12-11T13:22:36.641Z] Copying: 759/1024 [MB] (26 MBps) [2024-12-11T13:22:37.578Z] Copying: 786/1024 [MB] (26 MBps) [2024-12-11T13:22:38.956Z] Copying: 813/1024 [MB] (26 MBps) [2024-12-11T13:22:39.524Z] Copying: 840/1024 [MB] (26 MBps) [2024-12-11T13:22:40.899Z] Copying: 866/1024 [MB] (26 MBps) [2024-12-11T13:22:41.836Z] Copying: 892/1024 [MB] (26 MBps) [2024-12-11T13:22:42.774Z] Copying: 919/1024 [MB] (26 MBps) [2024-12-11T13:22:43.712Z] Copying: 945/1024 [MB] (26 MBps) [2024-12-11T13:22:44.649Z] Copying: 971/1024 [MB] (26 MBps) [2024-12-11T13:22:45.587Z] Copying: 997/1024 [MB] (25 MBps) [2024-12-11T13:22:45.587Z] Copying: 1022/1024 [MB] (25 MBps) [2024-12-11T13:22:46.967Z] 
Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-11 13:22:46.831951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.832340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.399 [2024-12-11 13:22:46.832488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:55.399 [2024-12-11 13:22:46.832542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.832637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.399 [2024-12-11 13:22:46.837917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.838076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.399 [2024-12-11 13:22:46.838174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:24:55.399 [2024-12-11 13:22:46.838214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.838480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.838518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.399 [2024-12-11 13:22:46.838551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:24:55.399 [2024-12-11 13:22:46.838646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.841627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.841755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.399 [2024-12-11 13:22:46.841836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:24:55.399 [2024-12-11 13:22:46.841881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.847209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.847345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.399 [2024-12-11 13:22:46.847471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:24:55.399 [2024-12-11 13:22:46.847509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.888513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.888741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.399 [2024-12-11 13:22:46.888832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.963 ms 00:24:55.399 [2024-12-11 13:22:46.888868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.910250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.910427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.399 [2024-12-11 13:22:46.910511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.344 ms 00:24:55.399 [2024-12-11 13:22:46.910548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.910786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.910938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L 
metadata 00:24:55.399 [2024-12-11 13:22:46.911010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:55.399 [2024-12-11 13:22:46.911042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.399 [2024-12-11 13:22:46.948634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.399 [2024-12-11 13:22:46.948829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:55.399 [2024-12-11 13:22:46.948934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.614 ms 00:24:55.399 [2024-12-11 13:22:46.948950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.659 [2024-12-11 13:22:46.986895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.659 [2024-12-11 13:22:46.986950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:55.659 [2024-12-11 13:22:46.986968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.898 ms 00:24:55.659 [2024-12-11 13:22:46.986980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.659 [2024-12-11 13:22:47.024409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.659 [2024-12-11 13:22:47.024465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.659 [2024-12-11 13:22:47.024500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.431 ms 00:24:55.659 [2024-12-11 13:22:47.024511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.659 [2024-12-11 13:22:47.061259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.659 [2024-12-11 13:22:47.061319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.659 [2024-12-11 13:22:47.061337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.697 ms 00:24:55.659 [2024-12-11 13:22:47.061349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.659 [2024-12-11 13:22:47.061399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.659 [2024-12-11 13:22:47.061427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.659 [2024-12-11 13:22:47.061538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 
261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.061997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062107] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 
13:22:47.062402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.660 [2024-12-11 13:22:47.062534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.661 [2024-12-11 13:22:47.062545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.661 [2024-12-11 13:22:47.062556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.661 [2024-12-11 13:22:47.062568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.661 [2024-12-11 13:22:47.062587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.661 [2024-12-11 13:22:47.062597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:24:55.661 [2024-12-11 13:22:47.062609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:55.661 [2024-12-11 13:22:47.062619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:55.661 [2024-12-11 13:22:47.062630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:55.661 [2024-12-11 13:22:47.062641] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:55.661 [2024-12-11 13:22:47.062666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.661 [2024-12-11 13:22:47.062676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.661 [2024-12-11 13:22:47.062686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.661 [2024-12-11 13:22:47.062696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.661 [2024-12-11 13:22:47.062706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.661 [2024-12-11 
13:22:47.062717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.661 [2024-12-11 13:22:47.062728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.661 [2024-12-11 13:22:47.062739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:24:55.661 [2024-12-11 13:22:47.062754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.084694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.661 [2024-12-11 13:22:47.084742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.661 [2024-12-11 13:22:47.084757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.928 ms 00:24:55.661 [2024-12-11 13:22:47.084768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.085438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.661 [2024-12-11 13:22:47.085457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.661 [2024-12-11 13:22:47.085477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:24:55.661 [2024-12-11 13:22:47.085488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.141140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.661 [2024-12-11 13:22:47.141200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.661 [2024-12-11 13:22:47.141233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.661 [2024-12-11 13:22:47.141245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.141335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.661 [2024-12-11 13:22:47.141348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.661 [2024-12-11 13:22:47.141372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.661 [2024-12-11 13:22:47.141383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.141476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.661 [2024-12-11 13:22:47.141490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.661 [2024-12-11 13:22:47.141502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.661 [2024-12-11 13:22:47.141513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.661 [2024-12-11 13:22:47.141541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.661 [2024-12-11 13:22:47.141553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.661 [2024-12-11 13:22:47.141564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.661 [2024-12-11 13:22:47.141584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.920 [2024-12-11 13:22:47.279385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.921 [2024-12-11 13:22:47.279456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.921 [2024-12-11 13:22:47.279474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.921 [2024-12-11 13:22:47.279502] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.386408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.386490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:55.921 [2024-12-11 13:22:47.386515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.386526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.386650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.386663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:55.921 [2024-12-11 13:22:47.386675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.386686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.386735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.386748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:55.921 [2024-12-11 13:22:47.386760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.386771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.386908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.386921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:55.921 [2024-12-11 13:22:47.386932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.386943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.386981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.386994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:55.921 [2024-12-11 13:22:47.387005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.387016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.387067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.387081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:55.921 [2024-12-11 13:22:47.387091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.387102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.387171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:55.921 [2024-12-11 13:22:47.387185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:55.921 [2024-12-11 13:22:47.387196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:55.921 [2024-12-11 13:22:47.387207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:55.921 [2024-12-11 13:22:47.387360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.286 ms, result 0
00:24:57.300 
00:24:57.300 
00:24:57.300 13:22:48 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
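The traced commands around this point are the test's verify-then-write step: restore.sh@76 checks the test file read back from the FTL volume against a stored md5, and restore.sh@79 (just below) writes the test file into the ftl0 bdev through spdk_dd at a block offset of 131072. A minimal sketch of that sequence, reconstructed from the traced command lines here rather than quoted from restore.sh itself, assuming the same paths and JSON bdev config that appear in the trace:

  # Verify the data read back from ftl0 against the expected checksum;
  # testfile.md5 is assumed to hold the "<md5>  testfile" line md5sum expects.
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

  # Replay the test file into the ftl0 bdev; spdk_dd follows dd-style
  # semantics, so --seek skips the first 131072 output blocks, --ob names
  # the output bdev, and --json supplies the bdev configuration.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --seek=131072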
00:24:59.206 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:24:59.206 13:22:50 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:24:59.206 [2024-12-11 13:22:50.391525] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
00:24:59.206 [2024-12-11 13:22:50.391663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81607 ]
00:24:59.206 [2024-12-11 13:22:50.574968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.206 [2024-12-11 13:22:50.720658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:59.775 [2024-12-11 13:22:51.145211] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:59.775 [2024-12-11 13:22:51.145287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:59.775 [2024-12-11 13:22:51.310363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:59.775 [2024-12-11 13:22:51.310430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:59.775 [2024-12-11 13:22:51.310447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:59.775 [2024-12-11 13:22:51.310458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:59.775 [2024-12-11 13:22:51.310509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:59.775 [2024-12-11 13:22:51.310526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:59.775 [2024-12-11 13:22:51.310538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:24:59.775 [2024-12-11 13:22:51.310548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:59.775 [2024-12-11 13:22:51.310570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:59.775 [2024-12-11 13:22:51.311484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:59.775 [2024-12-11 13:22:51.311514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:59.775 [2024-12-11 13:22:51.311526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:59.775 [2024-12-11 13:22:51.311538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms
00:24:59.775 [2024-12-11 13:22:51.311548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:59.775 [2024-12-11 13:22:51.314021] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:59.775 [2024-12-11 13:22:51.335244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:59.775 [2024-12-11 13:22:51.335282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:59.775 [2024-12-11 13:22:51.335299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.256 ms
00:24:59.775 [2024-12-11 13:22:51.335311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:59.775 [2024-12-11 13:22:51.335392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:59.775 [2024-12-11
13:22:51.335406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:59.775 [2024-12-11 13:22:51.335417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:59.775 [2024-12-11 13:22:51.335428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.348019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.348051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.036 [2024-12-11 13:22:51.348066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.534 ms 00:25:00.036 [2024-12-11 13:22:51.348083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.348191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.348207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.036 [2024-12-11 13:22:51.348219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:00.036 [2024-12-11 13:22:51.348230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.348295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.348307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.036 [2024-12-11 13:22:51.348319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:00.036 [2024-12-11 13:22:51.348329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.348364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.036 [2024-12-11 13:22:51.354059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.354092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.036 [2024-12-11 13:22:51.354109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.713 ms 00:25:00.036 [2024-12-11 13:22:51.354133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.354173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.354185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.036 [2024-12-11 13:22:51.354197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:00.036 [2024-12-11 13:22:51.354208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.354250] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:00.036 [2024-12-11 13:22:51.354279] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:00.036 [2024-12-11 13:22:51.354320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:00.036 [2024-12-11 13:22:51.354343] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:00.036 [2024-12-11 13:22:51.354440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:00.036 [2024-12-11 13:22:51.354453] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.036 [2024-12-11 13:22:51.354468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:00.036 [2024-12-11 13:22:51.354482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.036 [2024-12-11 13:22:51.354494] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.036 [2024-12-11 13:22:51.354507] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.036 [2024-12-11 13:22:51.354518] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.036 [2024-12-11 13:22:51.354529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:00.036 [2024-12-11 13:22:51.354543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:00.036 [2024-12-11 13:22:51.354554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.354565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.036 [2024-12-11 13:22:51.354576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:25:00.036 [2024-12-11 13:22:51.354587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.354661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.036 [2024-12-11 13:22:51.354673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.036 [2024-12-11 13:22:51.354685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:00.036 [2024-12-11 13:22:51.354695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.036 [2024-12-11 13:22:51.354792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.036 [2024-12-11 13:22:51.354806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.036 [2024-12-11 13:22:51.354818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.036 [2024-12-11 13:22:51.354829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.036 [2024-12-11 13:22:51.354840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.036 [2024-12-11 13:22:51.354850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.036 [2024-12-11 13:22:51.354861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.036 [2024-12-11 13:22:51.354871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.036 [2024-12-11 13:22:51.354882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:00.036 [2024-12-11 13:22:51.354892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.036 [2024-12-11 13:22:51.354901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.036 [2024-12-11 13:22:51.354911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.036 [2024-12-11 13:22:51.354921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.036 [2024-12-11 13:22:51.354944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.036 [2024-12-11 13:22:51.354955] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:25:00.036 [2024-12-11 13:22:51.354965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.036 [2024-12-11 13:22:51.354975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.036 [2024-12-11 13:22:51.354985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:00.036 [2024-12-11 13:22:51.354995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.036 [2024-12-11 13:22:51.355005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.036 [2024-12-11 13:22:51.355015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.036 [2024-12-11 13:22:51.355024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.037 [2024-12-11 13:22:51.355034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.037 [2024-12-11 13:22:51.355044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.037 [2024-12-11 13:22:51.355063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.037 [2024-12-11 13:22:51.355072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.037 [2024-12-11 13:22:51.355090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.037 [2024-12-11 13:22:51.355100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.037 [2024-12-11 13:22:51.355130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.037 [2024-12-11 13:22:51.355140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.037 [2024-12-11 13:22:51.355159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.037 [2024-12-11 13:22:51.355169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:00.037 [2024-12-11 13:22:51.355179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.037 [2024-12-11 13:22:51.355188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:00.037 [2024-12-11 13:22:51.355198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:00.037 [2024-12-11 13:22:51.355208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:00.037 [2024-12-11 13:22:51.355227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:00.037 [2024-12-11 13:22:51.355238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355247] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.037 [2024-12-11 13:22:51.355259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.037 [2024-12-11 13:22:51.355270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.037 [2024-12-11 
13:22:51.355281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.037 [2024-12-11 13:22:51.355291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.037 [2024-12-11 13:22:51.355301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.037 [2024-12-11 13:22:51.355311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.037 [2024-12-11 13:22:51.355320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.037 [2024-12-11 13:22:51.355330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.037 [2024-12-11 13:22:51.355340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.037 [2024-12-11 13:22:51.355351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.037 [2024-12-11 13:22:51.355364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.037 [2024-12-11 13:22:51.355391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:00.037 [2024-12-11 13:22:51.355401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:00.037 [2024-12-11 13:22:51.355412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:00.037 [2024-12-11 13:22:51.355423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:00.037 [2024-12-11 13:22:51.355434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:00.037 [2024-12-11 13:22:51.355444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:00.037 [2024-12-11 13:22:51.355455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:00.037 [2024-12-11 13:22:51.355465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:00.037 [2024-12-11 13:22:51.355476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:00.037 [2024-12-11 
13:22:51.355528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:00.037 [2024-12-11 13:22:51.355540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.037 [2024-12-11 13:22:51.355563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.037 [2024-12-11 13:22:51.355573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.037 [2024-12-11 13:22:51.355584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:00.037 [2024-12-11 13:22:51.355595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.355607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.037 [2024-12-11 13:22:51.355618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:25:00.037 [2024-12-11 13:22:51.355630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.406153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.406203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.037 [2024-12-11 13:22:51.406238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.547 ms 00:25:00.037 [2024-12-11 13:22:51.406254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.406357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.406369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.037 [2024-12-11 13:22:51.406380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:00.037 [2024-12-11 13:22:51.406391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.472518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.472566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.037 [2024-12-11 13:22:51.472598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.125 ms 00:25:00.037 [2024-12-11 13:22:51.472609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.472667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.472684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.037 [2024-12-11 13:22:51.472696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.037 [2024-12-11 13:22:51.472706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.473559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.473581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.037 [2024-12-11 13:22:51.473593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:25:00.037 [2024-12-11 13:22:51.473604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.473748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.473762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.037 [2024-12-11 13:22:51.473778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:25:00.037 [2024-12-11 13:22:51.473789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.496787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.496832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.037 [2024-12-11 13:22:51.496864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms 00:25:00.037 [2024-12-11 13:22:51.496876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.516678] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:00.037 [2024-12-11 13:22:51.516714] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.037 [2024-12-11 13:22:51.516747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.516758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.037 [2024-12-11 13:22:51.516771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.765 ms 00:25:00.037 [2024-12-11 13:22:51.516782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.545788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.545843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.037 [2024-12-11 13:22:51.545859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.005 ms 00:25:00.037 [2024-12-11 13:22:51.545872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.563454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.563489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.037 [2024-12-11 13:22:51.563502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:25:00.037 [2024-12-11 13:22:51.563529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.037 [2024-12-11 13:22:51.581238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.037 [2024-12-11 13:22:51.581271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.037 [2024-12-11 13:22:51.581284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.696 ms 00:25:00.037 [2024-12-11 13:22:51.581294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.038 [2024-12-11 13:22:51.582086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.038 [2024-12-11 13:22:51.582109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.038 [2024-12-11 13:22:51.582160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.692 ms 00:25:00.038 [2024-12-11 13:22:51.582170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.678460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.678559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.297 [2024-12-11 13:22:51.678585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.420 ms 00:25:00.297 [2024-12-11 13:22:51.678597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.689810] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.297 [2024-12-11 13:22:51.694271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.694301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.297 [2024-12-11 13:22:51.694335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.625 ms 00:25:00.297 [2024-12-11 13:22:51.694346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.694482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.694497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.297 [2024-12-11 13:22:51.694509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.297 [2024-12-11 13:22:51.694525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.694615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.694628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.297 [2024-12-11 13:22:51.694639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:00.297 [2024-12-11 13:22:51.694650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.694674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.694685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.297 [2024-12-11 13:22:51.694696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:00.297 [2024-12-11 13:22:51.694707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.694753] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.297 [2024-12-11 13:22:51.694767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.694777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.297 [2024-12-11 13:22:51.694789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:00.297 [2024-12-11 13:22:51.694799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.732158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.732197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.297 [2024-12-11 13:22:51.732236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.397 ms 00:25:00.297 [2024-12-11 13:22:51.732247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
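Every management phase above is traced as an Action quadruple from mngt/ftl_mngt.c (427: Action, 428: name, 430: duration, 431: status), so per-phase timings can be totaled straight from this console text. A minimal shell sketch, assuming the output has been saved to a file named ftl.log (a hypothetical path); summed over one startup sequence the figure should land near the 'FTL startup' total of 423.679 ms reported just below, though run over the whole log it will also pick up the shutdown and second-run steps:

  # Sum every per-step duration traced by mngt/ftl_mngt.c (ftl.log is a stand-in path)
  grep -o 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: [0-9.]* ms' ftl.log \
    | awk '{sum += $(NF-1)} END {printf "traced steps: %.3f ms\n", sum}'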
00:25:00.297 [2024-12-11 13:22:51.732329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.297 [2024-12-11 13:22:51.732342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.297 [2024-12-11 13:22:51.732354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:00.297 [2024-12-11 13:22:51.732365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.297 [2024-12-11 13:22:51.733882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 423.679 ms, result 0 00:25:01.237  [2024-12-11T13:22:53.748Z] Copying: 25/1024 [MB] (25 MBps) [... intermediate Copying progress ticks (51/1024 through 1022/1024, steady at 24-25 MBps) elided ...] [2024-12-11T13:23:33.662Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-11 13:23:33.520493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.520578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.094 [2024-12-11 13:23:33.520607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:42.094 [2024-12-11 13:23:33.520619]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.522314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.094 [2024-12-11 13:23:33.529666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.529706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.094 [2024-12-11 13:23:33.529738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.318 ms 00:25:42.094 [2024-12-11 13:23:33.529751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.540225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.540278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.094 [2024-12-11 13:23:33.540293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.644 ms 00:25:42.094 [2024-12-11 13:23:33.540329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.564923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.564980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.094 [2024-12-11 13:23:33.564995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.614 ms 00:25:42.094 [2024-12-11 13:23:33.565006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.570189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.570244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.094 [2024-12-11 13:23:33.570257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:25:42.094 [2024-12-11 13:23:33.570276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.609044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.609086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.094 [2024-12-11 13:23:33.609101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.793 ms 00:25:42.094 [2024-12-11 13:23:33.609138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.094 [2024-12-11 13:23:33.629829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.094 [2024-12-11 13:23:33.629866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.094 [2024-12-11 13:23:33.629896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.673 ms 00:25:42.094 [2024-12-11 13:23:33.629908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.755282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.353 [2024-12-11 13:23:33.755322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.353 [2024-12-11 13:23:33.755338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.533 ms 00:25:42.353 [2024-12-11 13:23:33.755349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.792254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.353 [2024-12-11 13:23:33.792292] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.353 [2024-12-11 13:23:33.792305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.946 ms 00:25:42.353 [2024-12-11 13:23:33.792316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.827236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.353 [2024-12-11 13:23:33.827272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.353 [2024-12-11 13:23:33.827285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.924 ms 00:25:42.353 [2024-12-11 13:23:33.827295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.861745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.353 [2024-12-11 13:23:33.861782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.353 [2024-12-11 13:23:33.861795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.454 ms 00:25:42.353 [2024-12-11 13:23:33.861806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.895132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.353 [2024-12-11 13:23:33.895168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.353 [2024-12-11 13:23:33.895197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.286 ms 00:25:42.353 [2024-12-11 13:23:33.895207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.353 [2024-12-11 13:23:33.895243] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.353 [2024-12-11 13:23:33.895261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114944 / 261120 wr_cnt: 1 state: open 00:25:42.353 [2024-12-11 13:23:33.895275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.353 [2024-12-11 13:23:33.895468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895683] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 
13:23:33.895955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.895989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:25:42.354 [2024-12-11 13:23:33.896245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.354 [2024-12-11 13:23:33.896394] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.354 [2024-12-11 13:23:33.896405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:25:42.354 [2024-12-11 13:23:33.896417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114944 00:25:42.354 [2024-12-11 13:23:33.896427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115904 00:25:42.354 [2024-12-11 13:23:33.896437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114944 00:25:42.354 [2024-12-11 13:23:33.896448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:25:42.354 [2024-12-11 13:23:33.896476] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.354 [2024-12-11 13:23:33.896487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.354 [2024-12-11 13:23:33.896497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:42.354 [2024-12-11 13:23:33.896507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.354 [2024-12-11 13:23:33.896516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.354 [2024-12-11 13:23:33.896526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.354 [2024-12-11 13:23:33.896537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.354 [2024-12-11 13:23:33.896547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:25:42.354 [2024-12-11 13:23:33.896557] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:42.354 [2024-12-11 13:23:33.917635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.354 [2024-12-11 13:23:33.917668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.354 [2024-12-11 13:23:33.917687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.077 ms 00:25:42.354 [2024-12-11 13:23:33.917698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.354 [2024-12-11 13:23:33.918249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.354 [2024-12-11 13:23:33.918268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.354 [2024-12-11 13:23:33.918279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:25:42.354 [2024-12-11 13:23:33.918290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.613 [2024-12-11 13:23:33.971900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.613 [2024-12-11 13:23:33.971935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.613 [2024-12-11 13:23:33.971965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.613 [2024-12-11 13:23:33.971975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.613 [2024-12-11 13:23:33.972037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.614 [2024-12-11 13:23:33.972049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.614 [2024-12-11 13:23:33.972059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.614 [2024-12-11 13:23:33.972070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.614 [2024-12-11 13:23:33.972167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.614 [2024-12-11 13:23:33.972181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.614 [2024-12-11 13:23:33.972197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.614 [2024-12-11 13:23:33.972207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.614 [2024-12-11 13:23:33.972225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.614 [2024-12-11 13:23:33.972237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.614 [2024-12-11 13:23:33.972247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.614 [2024-12-11 13:23:33.972257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.614 [2024-12-11 13:23:34.101083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.614 [2024-12-11 13:23:34.101171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.614 [2024-12-11 13:23:34.101190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.614 [2024-12-11 13:23:34.101217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.205789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.205872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.873 [2024-12-11 13:23:34.205889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:42.873 [2024-12-11 13:23:34.205917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.873 [2024-12-11 13:23:34.206066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.873 [2024-12-11 13:23:34.206178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.873 [2024-12-11 13:23:34.206336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.873 [2024-12-11 13:23:34.206416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.873 [2024-12-11 13:23:34.206496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.873 [2024-12-11 13:23:34.206577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.873 [2024-12-11 13:23:34.206588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.873 [2024-12-11 13:23:34.206598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.873 [2024-12-11 13:23:34.206777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 688.789 ms, result 0 00:25:44.779 00:25:44.779 00:25:44.779 13:23:35 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:44.779 [2024-12-11 13:23:35.963415] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
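The spdk_dd read-back above addresses ftl0 in logical blocks; assuming the FTL bdev's 4 KiB block size (an assumption here, though consistent with the 1024 MB the earlier Copying progress reported for the same --count), --skip=131072 and --count=262144 work out to a 512 MiB offset and a 1 GiB transfer:

  # Block-to-byte arithmetic for the spdk_dd arguments, assuming 4096-byte blocks
  echo "skip:  $(( 131072 * 4096 / 1024 / 1024 )) MiB"   # 512 MiB into ftl0
  echo "count: $(( 262144 * 4096 / 1024 / 1024 )) MiB"   # 1024 MiB to copy out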
00:25:44.779 [2024-12-11 13:23:35.963576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82060 ] 00:25:44.779 [2024-12-11 13:23:36.148231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.779 [2024-12-11 13:23:36.289695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.348 [2024-12-11 13:23:36.712317] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:45.348 [2024-12-11 13:23:36.712406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:45.348 [2024-12-11 13:23:36.878289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.348 [2024-12-11 13:23:36.878352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:45.348 [2024-12-11 13:23:36.878369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:45.348 [2024-12-11 13:23:36.878382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.348 [2024-12-11 13:23:36.878434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.348 [2024-12-11 13:23:36.878451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.348 [2024-12-11 13:23:36.878463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:45.348 [2024-12-11 13:23:36.878474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.348 [2024-12-11 13:23:36.878497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:45.348 [2024-12-11 13:23:36.879446] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:45.348 [2024-12-11 13:23:36.879477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.348 [2024-12-11 13:23:36.879489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.348 [2024-12-11 13:23:36.879502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:25:45.348 [2024-12-11 13:23:36.879512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.348 [2024-12-11 13:23:36.881985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:45.348 [2024-12-11 13:23:36.902069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.348 [2024-12-11 13:23:36.902137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:45.348 [2024-12-11 13:23:36.902153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.117 ms 00:25:45.348 [2024-12-11 13:23:36.902181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.348 [2024-12-11 13:23:36.902260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.348 [2024-12-11 13:23:36.902274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:45.348 [2024-12-11 13:23:36.902287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:45.348 [2024-12-11 13:23:36.902297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.914929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:45.609 [2024-12-11 13:23:36.914966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.609 [2024-12-11 13:23:36.914980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.576 ms 00:25:45.609 [2024-12-11 13:23:36.914998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.915092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.915107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.609 [2024-12-11 13:23:36.915130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:45.609 [2024-12-11 13:23:36.915141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.915203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.915217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:45.609 [2024-12-11 13:23:36.915229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:45.609 [2024-12-11 13:23:36.915239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.915273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:45.609 [2024-12-11 13:23:36.920925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.920961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.609 [2024-12-11 13:23:36.920979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:25:45.609 [2024-12-11 13:23:36.920990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.921028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.921040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:45.609 [2024-12-11 13:23:36.921051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:45.609 [2024-12-11 13:23:36.921062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.921102] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:45.609 [2024-12-11 13:23:36.921143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:45.609 [2024-12-11 13:23:36.921182] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:45.609 [2024-12-11 13:23:36.921205] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:45.609 [2024-12-11 13:23:36.921297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:45.609 [2024-12-11 13:23:36.921326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:45.609 [2024-12-11 13:23:36.921341] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:45.609 [2024-12-11 13:23:36.921354] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:45.609 [2024-12-11 13:23:36.921367] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:45.609 [2024-12-11 13:23:36.921379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:45.609 [2024-12-11 13:23:36.921390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:45.609 [2024-12-11 13:23:36.921401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:45.609 [2024-12-11 13:23:36.921417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:45.609 [2024-12-11 13:23:36.921428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.921439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:45.609 [2024-12-11 13:23:36.921451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:25:45.609 [2024-12-11 13:23:36.921462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.921544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.609 [2024-12-11 13:23:36.921556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:45.609 [2024-12-11 13:23:36.921566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:45.609 [2024-12-11 13:23:36.921577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.609 [2024-12-11 13:23:36.921672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:45.609 [2024-12-11 13:23:36.921687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:45.609 [2024-12-11 13:23:36.921699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.609 [2024-12-11 13:23:36.921710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.609 [2024-12-11 13:23:36.921722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:45.610 [2024-12-11 13:23:36.921732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:45.610 [2024-12-11 13:23:36.921761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.610 [2024-12-11 13:23:36.921780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:45.610 [2024-12-11 13:23:36.921792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:45.610 [2024-12-11 13:23:36.921801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.610 [2024-12-11 13:23:36.921824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:45.610 [2024-12-11 13:23:36.921834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:45.610 [2024-12-11 13:23:36.921844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:45.610 [2024-12-11 13:23:36.921864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921873] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:45.610 [2024-12-11 13:23:36.921893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:45.610 [2024-12-11 13:23:36.921922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:45.610 [2024-12-11 13:23:36.921951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:45.610 [2024-12-11 13:23:36.921980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:45.610 [2024-12-11 13:23:36.921989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.610 [2024-12-11 13:23:36.921999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:45.610 [2024-12-11 13:23:36.922009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:45.610 [2024-12-11 13:23:36.922018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.610 [2024-12-11 13:23:36.922027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:45.610 [2024-12-11 13:23:36.922036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:45.610 [2024-12-11 13:23:36.922045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.610 [2024-12-11 13:23:36.922055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:45.610 [2024-12-11 13:23:36.922064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:45.610 [2024-12-11 13:23:36.922074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.922083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:45.610 [2024-12-11 13:23:36.922092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:45.610 [2024-12-11 13:23:36.922101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.922111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:45.610 [2024-12-11 13:23:36.922140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:45.610 [2024-12-11 13:23:36.922151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.610 [2024-12-11 13:23:36.922162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.610 [2024-12-11 13:23:36.922172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:45.610 [2024-12-11 13:23:36.922183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:45.610 [2024-12-11 13:23:36.922193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:45.610 
[2024-12-11 13:23:36.922202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:45.610 [2024-12-11 13:23:36.922212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:45.610 [2024-12-11 13:23:36.922221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:45.610 [2024-12-11 13:23:36.922233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:45.610 [2024-12-11 13:23:36.922245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:45.610 [2024-12-11 13:23:36.922273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:45.610 [2024-12-11 13:23:36.922284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:45.610 [2024-12-11 13:23:36.922294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:45.610 [2024-12-11 13:23:36.922305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:45.610 [2024-12-11 13:23:36.922317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:45.610 [2024-12-11 13:23:36.922328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:45.610 [2024-12-11 13:23:36.922339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:45.610 [2024-12-11 13:23:36.922350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:45.610 [2024-12-11 13:23:36.922361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:45.610 [2024-12-11 13:23:36.922414] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:45.610 [2024-12-11 13:23:36.922427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:45.610 [2024-12-11 13:23:36.922448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:45.610 [2024-12-11 13:23:36.922458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:45.610 [2024-12-11 13:23:36.922469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:45.610 [2024-12-11 13:23:36.922481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:36.922493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:45.610 [2024-12-11 13:23:36.922504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:25:45.610 [2024-12-11 13:23:36.922514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:36.971404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:36.971456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.610 [2024-12-11 13:23:36.971488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.912 ms 00:25:45.610 [2024-12-11 13:23:36.971504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:36.971600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:36.971612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:45.610 [2024-12-11 13:23:36.971624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:45.610 [2024-12-11 13:23:36.971634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:37.036220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:37.036274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.610 [2024-12-11 13:23:37.036291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.586 ms 00:25:45.610 [2024-12-11 13:23:37.036302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:37.036359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:37.036376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.610 [2024-12-11 13:23:37.036388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:45.610 [2024-12-11 13:23:37.036399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:37.037234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:37.037258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.610 [2024-12-11 13:23:37.037271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:25:45.610 [2024-12-11 13:23:37.037282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:37.037422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:37.037438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.610 [2024-12-11 13:23:37.037453] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:45.610 [2024-12-11 13:23:37.037465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.610 [2024-12-11 13:23:37.060375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.610 [2024-12-11 13:23:37.060424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.610 [2024-12-11 13:23:37.060440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:25:45.610 [2024-12-11 13:23:37.060452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.611 [2024-12-11 13:23:37.080632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:45.611 [2024-12-11 13:23:37.080673] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:45.611 [2024-12-11 13:23:37.080706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.611 [2024-12-11 13:23:37.080718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:45.611 [2024-12-11 13:23:37.080731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.150 ms 00:25:45.611 [2024-12-11 13:23:37.080741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.611 [2024-12-11 13:23:37.109156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.611 [2024-12-11 13:23:37.109212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:45.611 [2024-12-11 13:23:37.109244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.414 ms 00:25:45.611 [2024-12-11 13:23:37.109255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.611 [2024-12-11 13:23:37.127064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.611 [2024-12-11 13:23:37.127105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:45.611 [2024-12-11 13:23:37.127127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.790 ms 00:25:45.611 [2024-12-11 13:23:37.127138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.611 [2024-12-11 13:23:37.145493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.611 [2024-12-11 13:23:37.145542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:45.611 [2024-12-11 13:23:37.145557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.342 ms 00:25:45.611 [2024-12-11 13:23:37.145583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.611 [2024-12-11 13:23:37.146395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.611 [2024-12-11 13:23:37.146428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.611 [2024-12-11 13:23:37.146446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:25:45.611 [2024-12-11 13:23:37.146457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.245237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.245314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:45.870 [2024-12-11 13:23:37.245356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 98.915 ms 00:25:45.870 [2024-12-11 13:23:37.245368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.256980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.870 [2024-12-11 13:23:37.261903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.261941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.870 [2024-12-11 13:23:37.261958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.490 ms 00:25:45.870 [2024-12-11 13:23:37.261969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.262135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.262152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:45.870 [2024-12-11 13:23:37.262165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:45.870 [2024-12-11 13:23:37.262181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.264422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.264462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.870 [2024-12-11 13:23:37.264475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:25:45.870 [2024-12-11 13:23:37.264486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.264537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.264550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.870 [2024-12-11 13:23:37.264561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:45.870 [2024-12-11 13:23:37.264572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.264623] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:45.870 [2024-12-11 13:23:37.264636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.264647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:45.870 [2024-12-11 13:23:37.264658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:45.870 [2024-12-11 13:23:37.264669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.302384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.302430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.870 [2024-12-11 13:23:37.302470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.753 ms 00:25:45.870 [2024-12-11 13:23:37.302482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.870 [2024-12-11 13:23:37.302570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.870 [2024-12-11 13:23:37.302583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.870 [2024-12-11 13:23:37.302595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:45.870 [2024-12-11 13:23:37.302606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
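Every trace_step record above follows the same four-entry shape: an Action (or Rollback) marker, the step name, its duration, and a status code. Given a saved copy of this console output with one record per line, the per-step startup timings can be tabulated mechanically. A minimal sketch, assuming a hypothetical capture file ftl0.log (the file name and the one-record-per-line layout are assumptions, not part of the test):

# 428:trace_step carries the step name, 430:trace_step its duration;
# pair them up and print a name/duration table.
awk -F'name: |duration: ' '/428:trace_step/ {name=$2} /430:trace_step/ {print name "\t" $2}' ftl0.log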
00:25:45.871 [2024-12-11 13:23:37.304594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.042 ms, result 0 00:25:47.250  [2024-12-11T13:23:39.755Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-11T13:23:40.692Z] Copying: 50/1024 [MB] (26 MBps) [2024-12-11T13:23:41.656Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-11T13:23:42.593Z] Copying: 102/1024 [MB] (26 MBps) [2024-12-11T13:23:43.530Z] Copying: 129/1024 [MB] (26 MBps) [2024-12-11T13:23:44.909Z] Copying: 155/1024 [MB] (26 MBps) [2024-12-11T13:23:45.845Z] Copying: 182/1024 [MB] (26 MBps) [2024-12-11T13:23:46.783Z] Copying: 208/1024 [MB] (26 MBps) [2024-12-11T13:23:47.721Z] Copying: 235/1024 [MB] (26 MBps) [2024-12-11T13:23:48.658Z] Copying: 261/1024 [MB] (26 MBps) [2024-12-11T13:23:49.596Z] Copying: 287/1024 [MB] (26 MBps) [2024-12-11T13:23:50.534Z] Copying: 314/1024 [MB] (26 MBps) [2024-12-11T13:23:51.911Z] Copying: 341/1024 [MB] (26 MBps) [2024-12-11T13:23:52.848Z] Copying: 367/1024 [MB] (26 MBps) [2024-12-11T13:23:53.786Z] Copying: 393/1024 [MB] (26 MBps) [2024-12-11T13:23:54.723Z] Copying: 419/1024 [MB] (26 MBps) [2024-12-11T13:23:55.662Z] Copying: 446/1024 [MB] (26 MBps) [2024-12-11T13:23:56.600Z] Copying: 473/1024 [MB] (26 MBps) [2024-12-11T13:23:57.622Z] Copying: 500/1024 [MB] (27 MBps) [2024-12-11T13:23:58.559Z] Copying: 526/1024 [MB] (26 MBps) [2024-12-11T13:23:59.940Z] Copying: 553/1024 [MB] (26 MBps) [2024-12-11T13:24:00.508Z] Copying: 579/1024 [MB] (26 MBps) [2024-12-11T13:24:01.885Z] Copying: 605/1024 [MB] (26 MBps) [2024-12-11T13:24:02.821Z] Copying: 633/1024 [MB] (27 MBps) [2024-12-11T13:24:03.757Z] Copying: 660/1024 [MB] (27 MBps) [2024-12-11T13:24:04.694Z] Copying: 688/1024 [MB] (28 MBps) [2024-12-11T13:24:05.633Z] Copying: 717/1024 [MB] (28 MBps) [2024-12-11T13:24:06.569Z] Copying: 744/1024 [MB] (27 MBps) [2024-12-11T13:24:07.507Z] Copying: 771/1024 [MB] (26 MBps) [2024-12-11T13:24:08.885Z] Copying: 797/1024 [MB] (25 MBps) [2024-12-11T13:24:09.821Z] Copying: 823/1024 [MB] (26 MBps) [2024-12-11T13:24:10.758Z] Copying: 850/1024 [MB] (26 MBps) [2024-12-11T13:24:11.695Z] Copying: 877/1024 [MB] (27 MBps) [2024-12-11T13:24:12.631Z] Copying: 905/1024 [MB] (27 MBps) [2024-12-11T13:24:13.610Z] Copying: 931/1024 [MB] (26 MBps) [2024-12-11T13:24:14.547Z] Copying: 958/1024 [MB] (26 MBps) [2024-12-11T13:24:15.482Z] Copying: 985/1024 [MB] (26 MBps) [2024-12-11T13:24:16.050Z] Copying: 1011/1024 [MB] (26 MBps) [2024-12-11T13:24:16.618Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-11 13:24:16.324755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.324854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:25.050 [2024-12-11 13:24:16.324879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:25.050 [2024-12-11 13:24:16.324902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.324941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:25.050 [2024-12-11 13:24:16.330587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.330639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:25.050 [2024-12-11 13:24:16.330655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.627 ms 00:26:25.050 [2024-12-11 13:24:16.330667] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.330910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.330929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:25.050 [2024-12-11 13:24:16.330942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:26:25.050 [2024-12-11 13:24:16.330964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.335376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.335427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:25.050 [2024-12-11 13:24:16.335451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:26:25.050 [2024-12-11 13:24:16.335679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.341879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.341922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:25.050 [2024-12-11 13:24:16.341937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:26:25.050 [2024-12-11 13:24:16.341956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.383343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.383393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:25.050 [2024-12-11 13:24:16.383411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.401 ms 00:26:25.050 [2024-12-11 13:24:16.383423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.404394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.404440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:25.050 [2024-12-11 13:24:16.404457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.940 ms 00:26:25.050 [2024-12-11 13:24:16.404468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.550932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.550998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:25.050 [2024-12-11 13:24:16.551027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.646 ms 00:26:25.050 [2024-12-11 13:24:16.551039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.050 [2024-12-11 13:24:16.589638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.050 [2024-12-11 13:24:16.589688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.050 [2024-12-11 13:24:16.589706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.641 ms 00:26:25.050 [2024-12-11 13:24:16.589717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.310 [2024-12-11 13:24:16.625683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.310 [2024-12-11 13:24:16.625744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:25.310 [2024-12-11 13:24:16.625761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.980 ms 00:26:25.310 
[2024-12-11 13:24:16.625773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.310 [2024-12-11 13:24:16.660690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.310 [2024-12-11 13:24:16.660729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.310 [2024-12-11 13:24:16.660744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.933 ms 00:26:25.310 [2024-12-11 13:24:16.660754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.310 [2024-12-11 13:24:16.695030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.310 [2024-12-11 13:24:16.695071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.310 [2024-12-11 13:24:16.695084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.231 ms 00:26:25.310 [2024-12-11 13:24:16.695095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.311 [2024-12-11 13:24:16.695157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.311 [2024-12-11 13:24:16.695176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:25.311 [2024-12-11 13:24:16.695191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 
[2024-12-11 13:24:16.695373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:26:25.311 [2024-12-11 13:24:16.695657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.695999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:25.311 [2024-12-11 13:24:16.696187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.312 [2024-12-11 13:24:16.696320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.312 [2024-12-11 13:24:16.696331] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ac27823-e4d5-46f8-ad78-b95cbb6bd09a 00:26:25.312 [2024-12-11 13:24:16.696343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:25.312 [2024-12-11 13:24:16.696354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17088 00:26:25.312 [2024-12-11 13:24:16.696364] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16128 00:26:25.312 [2024-12-11 13:24:16.696375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0595 00:26:25.312 [2024-12-11 13:24:16.696393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.312 [2024-12-11 13:24:16.696417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.312 [2024-12-11 13:24:16.696428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.312 [2024-12-11 13:24:16.696437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.312 [2024-12-11 13:24:16.696447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.312 [2024-12-11 13:24:16.696458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.312 [2024-12-11 13:24:16.696469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.312 [2024-12-11 13:24:16.696480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:26:25.312 [2024-12-11 13:24:16.696490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.717412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.312 [2024-12-11 13:24:16.717446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.312 [2024-12-11 13:24:16.717466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.918 ms 00:26:25.312 [2024-12-11 13:24:16.717477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.718091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.312 [2024-12-11 13:24:16.718127] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.312 [2024-12-11 13:24:16.718142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:26:25.312 [2024-12-11 13:24:16.718153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.771957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.312 [2024-12-11 13:24:16.772006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.312 [2024-12-11 13:24:16.772021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.312 [2024-12-11 13:24:16.772033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.772116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.312 [2024-12-11 13:24:16.772136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.312 [2024-12-11 13:24:16.772148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.312 [2024-12-11 13:24:16.772159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.772265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.312 [2024-12-11 13:24:16.772280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.312 [2024-12-11 13:24:16.772297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.312 [2024-12-11 13:24:16.772308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.312 [2024-12-11 13:24:16.772327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.312 [2024-12-11 13:24:16.772338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.312 [2024-12-11 13:24:16.772349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.312 [2024-12-11 13:24:16.772359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:16.906126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:16.906221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.572 [2024-12-11 13:24:16.906239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:16.906251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.572 [2024-12-11 13:24:17.010163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.572 [2024-12-11 13:24:17.010318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:25.572 [2024-12-11 13:24:17.010404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.572 [2024-12-11 13:24:17.010416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.572 [2024-12-11 13:24:17.010589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.572 [2024-12-11 13:24:17.010669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.572 [2024-12-11 13:24:17.010751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.010823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.572 [2024-12-11 13:24:17.010838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.572 [2024-12-11 13:24:17.010849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.572 [2024-12-11 13:24:17.010860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.572 [2024-12-11 13:24:17.011007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 687.340 ms, result 0 00:26:26.951 00:26:26.951 00:26:26.951 13:24:18 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:28.860 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:28.860 13:24:19 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:28.860 13:24:19 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:28.860 13:24:19 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:28.860 Process with pid 80476 is not found 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80476 00:26:28.860 13:24:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80476 ']' 00:26:28.860 13:24:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80476 00:26:28.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80476) - No such process 00:26:28.860 13:24:20 ftl.ftl_restore -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 80476 is not found' 00:26:28.860 Remove shared memory files 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:28.860 13:24:20 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:28.860 00:26:28.860 real 3m19.363s 00:26:28.860 user 3m4.440s 00:26:28.860 sys 0m15.634s 00:26:28.860 13:24:20 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.860 13:24:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:28.860 ************************************ 00:26:28.860 END TEST ftl_restore 00:26:28.860 ************************************ 00:26:28.860 13:24:20 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:28.860 13:24:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:28.860 13:24:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.860 13:24:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:28.860 ************************************ 00:26:28.860 START TEST ftl_dirty_shutdown 00:26:28.860 ************************************ 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:28.860 * Looking for test storage... 
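The killprocess trace above probes pid 80476 with kill -0, finds the process already gone, and falls back to echoing the not-found message instead of failing the teardown. A minimal sketch of that shape, assuming this reconstruction of the helper (the real one lives in autotest_common.sh and differs in detail):

killprocess() {
  local pid=$1
  # mirror the '[' -z ']' guard in the trace: no pid, nothing to kill
  [ -z "$pid" ] && return 1
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"    # process still alive: terminate it
  else
    echo "Process with pid $pid is not found"
  fi
}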
00:26:28.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.860 --rc genhtml_branch_coverage=1 00:26:28.860 --rc genhtml_function_coverage=1 00:26:28.860 --rc genhtml_legend=1 00:26:28.860 --rc geninfo_all_blocks=1 00:26:28.860 --rc geninfo_unexecuted_blocks=1 00:26:28.860 00:26:28.860 ' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.860 --rc genhtml_branch_coverage=1 00:26:28.860 --rc genhtml_function_coverage=1 00:26:28.860 --rc genhtml_legend=1 00:26:28.860 --rc geninfo_all_blocks=1 00:26:28.860 --rc geninfo_unexecuted_blocks=1 00:26:28.860 00:26:28.860 ' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.860 --rc genhtml_branch_coverage=1 00:26:28.860 --rc genhtml_function_coverage=1 00:26:28.860 --rc genhtml_legend=1 00:26:28.860 --rc geninfo_all_blocks=1 00:26:28.860 --rc geninfo_unexecuted_blocks=1 00:26:28.860 00:26:28.860 ' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.860 --rc genhtml_branch_coverage=1 00:26:28.860 --rc genhtml_function_coverage=1 00:26:28.860 --rc genhtml_legend=1 00:26:28.860 --rc geninfo_all_blocks=1 00:26:28.860 --rc geninfo_unexecuted_blocks=1 00:26:28.860 00:26:28.860 ' 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:28.860 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:29.121 13:24:20 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82573 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82573 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82573 ']' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.121 13:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.121 [2024-12-11 13:24:20.572944] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
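waitforlisten 82573 blocks until the spdk_tgt instance launched just above is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming a simple polling loop (the real helper in autotest_common.sh is more elaborate); rpc_get_methods is a stock SPDK RPC, used here only as a liveness probe:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# poll the default RPC socket until the target answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$svcpid" 2>/dev/null || exit 1   # bail out if the target died during startup
  sleep 0.1
done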
00:26:29.121 [2024-12-11 13:24:20.573091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82573 ] 00:26:29.380 [2024-12-11 13:24:20.759302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.380 [2024-12-11 13:24:20.901446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:30.760 13:24:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:30.760 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:31.020 { 00:26:31.020 "name": "nvme0n1", 00:26:31.020 "aliases": [ 00:26:31.020 "276b859e-9b47-433e-9242-7ac5c7584b6b" 00:26:31.020 ], 00:26:31.020 "product_name": "NVMe disk", 00:26:31.020 "block_size": 4096, 00:26:31.020 "num_blocks": 1310720, 00:26:31.020 "uuid": "276b859e-9b47-433e-9242-7ac5c7584b6b", 00:26:31.020 "numa_id": -1, 00:26:31.020 "assigned_rate_limits": { 00:26:31.020 "rw_ios_per_sec": 0, 00:26:31.020 "rw_mbytes_per_sec": 0, 00:26:31.020 "r_mbytes_per_sec": 0, 00:26:31.020 "w_mbytes_per_sec": 0 00:26:31.020 }, 00:26:31.020 "claimed": true, 00:26:31.020 "claim_type": "read_many_write_one", 00:26:31.020 "zoned": false, 00:26:31.020 "supported_io_types": { 00:26:31.020 "read": true, 00:26:31.020 "write": true, 00:26:31.020 "unmap": true, 00:26:31.020 "flush": true, 00:26:31.020 "reset": true, 00:26:31.020 "nvme_admin": true, 00:26:31.020 "nvme_io": true, 00:26:31.020 "nvme_io_md": false, 00:26:31.020 "write_zeroes": true, 00:26:31.020 "zcopy": false, 00:26:31.020 "get_zone_info": false, 00:26:31.020 "zone_management": false, 00:26:31.020 "zone_append": false, 00:26:31.020 "compare": true, 00:26:31.020 "compare_and_write": false, 00:26:31.020 "abort": true, 00:26:31.020 "seek_hole": false, 00:26:31.020 "seek_data": false, 00:26:31.020 
"copy": true, 00:26:31.020 "nvme_iov_md": false 00:26:31.020 }, 00:26:31.020 "driver_specific": { 00:26:31.020 "nvme": [ 00:26:31.020 { 00:26:31.020 "pci_address": "0000:00:11.0", 00:26:31.020 "trid": { 00:26:31.020 "trtype": "PCIe", 00:26:31.020 "traddr": "0000:00:11.0" 00:26:31.020 }, 00:26:31.020 "ctrlr_data": { 00:26:31.020 "cntlid": 0, 00:26:31.020 "vendor_id": "0x1b36", 00:26:31.020 "model_number": "QEMU NVMe Ctrl", 00:26:31.020 "serial_number": "12341", 00:26:31.020 "firmware_revision": "8.0.0", 00:26:31.020 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:31.020 "oacs": { 00:26:31.020 "security": 0, 00:26:31.020 "format": 1, 00:26:31.020 "firmware": 0, 00:26:31.020 "ns_manage": 1 00:26:31.020 }, 00:26:31.020 "multi_ctrlr": false, 00:26:31.020 "ana_reporting": false 00:26:31.020 }, 00:26:31.020 "vs": { 00:26:31.020 "nvme_version": "1.4" 00:26:31.020 }, 00:26:31.020 "ns_data": { 00:26:31.020 "id": 1, 00:26:31.020 "can_share": false 00:26:31.020 } 00:26:31.020 } 00:26:31.020 ], 00:26:31.020 "mp_policy": "active_passive" 00:26:31.020 } 00:26:31.020 } 00:26:31.020 ]' 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:31.020 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:31.280 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e36bdb91-bc6d-4601-8fb1-84d880e5ef7a 00:26:31.280 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:31.280 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e36bdb91-bc6d-4601-8fb1-84d880e5ef7a 00:26:31.539 13:24:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:31.799 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=667d6507-0413-4d5b-a683-6be6a98979ad 00:26:31.799 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 667d6507-0413-4d5b-a683-6be6a98979ad 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:32.058 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.317 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:32.317 { 00:26:32.317 "name": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:32.317 "aliases": [ 00:26:32.317 "lvs/nvme0n1p0" 00:26:32.317 ], 00:26:32.317 "product_name": "Logical Volume", 00:26:32.317 "block_size": 4096, 00:26:32.317 "num_blocks": 26476544, 00:26:32.317 "uuid": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:32.317 "assigned_rate_limits": { 00:26:32.317 "rw_ios_per_sec": 0, 00:26:32.317 "rw_mbytes_per_sec": 0, 00:26:32.317 "r_mbytes_per_sec": 0, 00:26:32.317 "w_mbytes_per_sec": 0 00:26:32.317 }, 00:26:32.317 "claimed": false, 00:26:32.317 "zoned": false, 00:26:32.317 "supported_io_types": { 00:26:32.317 "read": true, 00:26:32.317 "write": true, 00:26:32.317 "unmap": true, 00:26:32.317 "flush": false, 00:26:32.317 "reset": true, 00:26:32.317 "nvme_admin": false, 00:26:32.317 "nvme_io": false, 00:26:32.317 "nvme_io_md": false, 00:26:32.317 "write_zeroes": true, 00:26:32.317 "zcopy": false, 00:26:32.317 "get_zone_info": false, 00:26:32.317 "zone_management": false, 00:26:32.317 "zone_append": false, 00:26:32.317 "compare": false, 00:26:32.317 "compare_and_write": false, 00:26:32.317 "abort": false, 00:26:32.317 "seek_hole": true, 00:26:32.317 "seek_data": true, 00:26:32.317 "copy": false, 00:26:32.317 "nvme_iov_md": false 00:26:32.317 }, 00:26:32.317 "driver_specific": { 00:26:32.317 "lvol": { 00:26:32.317 "lvol_store_uuid": "667d6507-0413-4d5b-a683-6be6a98979ad", 00:26:32.317 "base_bdev": "nvme0n1", 00:26:32.317 "thin_provision": true, 00:26:32.317 "num_allocated_clusters": 0, 00:26:32.317 "snapshot": false, 00:26:32.317 "clone": false, 00:26:32.317 "esnap_clone": false 00:26:32.317 } 00:26:32.317 } 00:26:32.317 } 00:26:32.317 ]' 00:26:32.317 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:32.317 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:32.317 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:32.318 13:24:23 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:32.577 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:32.836 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:32.836 { 00:26:32.836 "name": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:32.836 "aliases": [ 00:26:32.836 "lvs/nvme0n1p0" 00:26:32.836 ], 00:26:32.836 "product_name": "Logical Volume", 00:26:32.836 "block_size": 4096, 00:26:32.836 "num_blocks": 26476544, 00:26:32.836 "uuid": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:32.836 "assigned_rate_limits": { 00:26:32.837 "rw_ios_per_sec": 0, 00:26:32.837 "rw_mbytes_per_sec": 0, 00:26:32.837 "r_mbytes_per_sec": 0, 00:26:32.837 "w_mbytes_per_sec": 0 00:26:32.837 }, 00:26:32.837 "claimed": false, 00:26:32.837 "zoned": false, 00:26:32.837 "supported_io_types": { 00:26:32.837 "read": true, 00:26:32.837 "write": true, 00:26:32.837 "unmap": true, 00:26:32.837 "flush": false, 00:26:32.837 "reset": true, 00:26:32.837 "nvme_admin": false, 00:26:32.837 "nvme_io": false, 00:26:32.837 "nvme_io_md": false, 00:26:32.837 "write_zeroes": true, 00:26:32.837 "zcopy": false, 00:26:32.837 "get_zone_info": false, 00:26:32.837 "zone_management": false, 00:26:32.837 "zone_append": false, 00:26:32.837 "compare": false, 00:26:32.837 "compare_and_write": false, 00:26:32.837 "abort": false, 00:26:32.837 "seek_hole": true, 00:26:32.837 "seek_data": true, 00:26:32.837 "copy": false, 00:26:32.837 "nvme_iov_md": false 00:26:32.837 }, 00:26:32.837 "driver_specific": { 00:26:32.837 "lvol": { 00:26:32.837 "lvol_store_uuid": "667d6507-0413-4d5b-a683-6be6a98979ad", 00:26:32.837 "base_bdev": "nvme0n1", 00:26:32.837 "thin_provision": true, 00:26:32.837 "num_allocated_clusters": 0, 00:26:32.837 "snapshot": false, 00:26:32.837 "clone": false, 00:26:32.837 "esnap_clone": false 00:26:32.837 } 00:26:32.837 } 00:26:32.837 } 00:26:32.837 ]' 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:32.837 13:24:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:33.096 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95bbba25-6b0b-42c7-9de0-2618225a7b17 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:33.356 { 00:26:33.356 "name": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:33.356 "aliases": [ 00:26:33.356 "lvs/nvme0n1p0" 00:26:33.356 ], 00:26:33.356 "product_name": "Logical Volume", 00:26:33.356 "block_size": 4096, 00:26:33.356 "num_blocks": 26476544, 00:26:33.356 "uuid": "95bbba25-6b0b-42c7-9de0-2618225a7b17", 00:26:33.356 "assigned_rate_limits": { 00:26:33.356 "rw_ios_per_sec": 0, 00:26:33.356 "rw_mbytes_per_sec": 0, 00:26:33.356 "r_mbytes_per_sec": 0, 00:26:33.356 "w_mbytes_per_sec": 0 00:26:33.356 }, 00:26:33.356 "claimed": false, 00:26:33.356 "zoned": false, 00:26:33.356 "supported_io_types": { 00:26:33.356 "read": true, 00:26:33.356 "write": true, 00:26:33.356 "unmap": true, 00:26:33.356 "flush": false, 00:26:33.356 "reset": true, 00:26:33.356 "nvme_admin": false, 00:26:33.356 "nvme_io": false, 00:26:33.356 "nvme_io_md": false, 00:26:33.356 "write_zeroes": true, 00:26:33.356 "zcopy": false, 00:26:33.356 "get_zone_info": false, 00:26:33.356 "zone_management": false, 00:26:33.356 "zone_append": false, 00:26:33.356 "compare": false, 00:26:33.356 "compare_and_write": false, 00:26:33.356 "abort": false, 00:26:33.356 "seek_hole": true, 00:26:33.356 "seek_data": true, 00:26:33.356 "copy": false, 00:26:33.356 "nvme_iov_md": false 00:26:33.356 }, 00:26:33.356 "driver_specific": { 00:26:33.356 "lvol": { 00:26:33.356 "lvol_store_uuid": "667d6507-0413-4d5b-a683-6be6a98979ad", 00:26:33.356 "base_bdev": "nvme0n1", 00:26:33.356 "thin_provision": true, 00:26:33.356 "num_allocated_clusters": 0, 00:26:33.356 "snapshot": false, 00:26:33.356 "clone": false, 00:26:33.356 "esnap_clone": false 00:26:33.356 } 00:26:33.356 } 00:26:33.356 } 00:26:33.356 ]' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 95bbba25-6b0b-42c7-9de0-2618225a7b17 
--l2p_dram_limit 10' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:33.356 13:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 95bbba25-6b0b-42c7-9de0-2618225a7b17 --l2p_dram_limit 10 -c nvc0n1p0 00:26:33.617 [2024-12-11 13:24:25.109891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.109962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:33.617 [2024-12-11 13:24:25.109984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:33.617 [2024-12-11 13:24:25.109997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.110079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.110092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.617 [2024-12-11 13:24:25.110107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:33.617 [2024-12-11 13:24:25.110132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.110167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:33.617 [2024-12-11 13:24:25.111333] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:33.617 [2024-12-11 13:24:25.111377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.111389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.617 [2024-12-11 13:24:25.111404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:26:33.617 [2024-12-11 13:24:25.111415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.111558] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fc0ae49a-7b70-485f-90a1-9e0399327912 00:26:33.617 [2024-12-11 13:24:25.113961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.114005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:33.617 [2024-12-11 13:24:25.114019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:33.617 [2024-12-11 13:24:25.114033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.127773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.127823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.617 [2024-12-11 13:24:25.127854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.689 ms 00:26:33.617 [2024-12-11 13:24:25.127869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.127993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.128012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.617 [2024-12-11 13:24:25.128024] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:33.617 [2024-12-11 13:24:25.128043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.128173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.128193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:33.617 [2024-12-11 13:24:25.128205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:33.617 [2024-12-11 13:24:25.128223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.128253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:33.617 [2024-12-11 13:24:25.134756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.134795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.617 [2024-12-11 13:24:25.134812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.517 ms 00:26:33.617 [2024-12-11 13:24:25.134823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.134869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.134881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:33.617 [2024-12-11 13:24:25.134895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:33.617 [2024-12-11 13:24:25.134906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.134953] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:33.617 [2024-12-11 13:24:25.135106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:33.617 [2024-12-11 13:24:25.135140] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:33.617 [2024-12-11 13:24:25.135154] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:33.617 [2024-12-11 13:24:25.135172] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135184] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135199] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:33.617 [2024-12-11 13:24:25.135210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:33.617 [2024-12-11 13:24:25.135230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:33.617 [2024-12-11 13:24:25.135241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:33.617 [2024-12-11 13:24:25.135255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.135278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:33.617 [2024-12-11 13:24:25.135293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:26:33.617 [2024-12-11 13:24:25.135302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.135385] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.617 [2024-12-11 13:24:25.135396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:33.617 [2024-12-11 13:24:25.135409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:33.617 [2024-12-11 13:24:25.135419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.617 [2024-12-11 13:24:25.135519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:33.617 [2024-12-11 13:24:25.135533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:33.617 [2024-12-11 13:24:25.135547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:33.617 [2024-12-11 13:24:25.135580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:33.617 [2024-12-11 13:24:25.135615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.617 [2024-12-11 13:24:25.135640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:33.617 [2024-12-11 13:24:25.135650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:33.617 [2024-12-11 13:24:25.135662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.617 [2024-12-11 13:24:25.135672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:33.617 [2024-12-11 13:24:25.135685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:33.617 [2024-12-11 13:24:25.135694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:33.617 [2024-12-11 13:24:25.135721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:33.617 [2024-12-11 13:24:25.135756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:33.617 [2024-12-11 13:24:25.135787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:33.617 [2024-12-11 13:24:25.135821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135843] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:33.617 [2024-12-11 13:24:25.135852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.617 [2024-12-11 13:24:25.135873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:33.617 [2024-12-11 13:24:25.135889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.617 [2024-12-11 13:24:25.135911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:33.617 [2024-12-11 13:24:25.135920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:33.617 [2024-12-11 13:24:25.135934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.617 [2024-12-11 13:24:25.135943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:33.617 [2024-12-11 13:24:25.135956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:33.617 [2024-12-11 13:24:25.135965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.135978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:33.617 [2024-12-11 13:24:25.135987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:33.617 [2024-12-11 13:24:25.136000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.617 [2024-12-11 13:24:25.136009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:33.617 [2024-12-11 13:24:25.136022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:33.618 [2024-12-11 13:24:25.136033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.618 [2024-12-11 13:24:25.136046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.618 [2024-12-11 13:24:25.136058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:33.618 [2024-12-11 13:24:25.136075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:33.618 [2024-12-11 13:24:25.136085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:33.618 [2024-12-11 13:24:25.136098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:33.618 [2024-12-11 13:24:25.136107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:33.618 [2024-12-11 13:24:25.136129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:33.618 [2024-12-11 13:24:25.136141] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:33.618 [2024-12-11 13:24:25.136157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:33.618 [2024-12-11 13:24:25.136187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:33.618 [2024-12-11 13:24:25.136198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:33.618 [2024-12-11 13:24:25.136211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:33.618 [2024-12-11 13:24:25.136222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:33.618 [2024-12-11 13:24:25.136235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:33.618 [2024-12-11 13:24:25.136245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:33.618 [2024-12-11 13:24:25.136260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:33.618 [2024-12-11 13:24:25.136272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:33.618 [2024-12-11 13:24:25.136288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:33.618 [2024-12-11 13:24:25.136346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:33.618 [2024-12-11 13:24:25.136361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:33.618 [2024-12-11 13:24:25.136385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:33.618 [2024-12-11 13:24:25.136395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:33.618 [2024-12-11 13:24:25.136409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:33.618 [2024-12-11 13:24:25.136420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.618 [2024-12-11 13:24:25.136434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:33.618 [2024-12-11 13:24:25.136444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:26:33.618 [2024-12-11 13:24:25.136459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.618 [2024-12-11 13:24:25.136509] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:33.618 [2024-12-11 13:24:25.136529] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:36.959 [2024-12-11 13:24:28.396445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.396559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:36.959 [2024-12-11 13:24:28.396579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3265.226 ms 00:26:36.959 [2024-12-11 13:24:28.396594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.444200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.444264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.959 [2024-12-11 13:24:28.444283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.266 ms 00:26:36.959 [2024-12-11 13:24:28.444298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.444489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.444507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:36.959 [2024-12-11 13:24:28.444519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:36.959 [2024-12-11 13:24:28.444541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.497973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.498038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.959 [2024-12-11 13:24:28.498055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.446 ms 00:26:36.959 [2024-12-11 13:24:28.498070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.498147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.498170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.959 [2024-12-11 13:24:28.498182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.959 [2024-12-11 13:24:28.498207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.499042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.499071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.959 [2024-12-11 13:24:28.499083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:26:36.959 [2024-12-11 13:24:28.499096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.959 [2024-12-11 13:24:28.499231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.959 [2024-12-11 13:24:28.499248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.959 [2024-12-11 13:24:28.499263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:36.959 [2024-12-11 13:24:28.499280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.525402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.525472] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.218 [2024-12-11 13:24:28.525489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:26:37.218 [2024-12-11 13:24:28.525504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.550615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:37.218 [2024-12-11 13:24:28.555788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.555825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:37.218 [2024-12-11 13:24:28.555861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.172 ms 00:26:37.218 [2024-12-11 13:24:28.555872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.646040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.646134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:37.218 [2024-12-11 13:24:28.646158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.242 ms 00:26:37.218 [2024-12-11 13:24:28.646170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.646403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.646422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:37.218 [2024-12-11 13:24:28.646441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:26:37.218 [2024-12-11 13:24:28.646453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.684369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.684447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:37.218 [2024-12-11 13:24:28.684470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.907 ms 00:26:37.218 [2024-12-11 13:24:28.684482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.720799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.720855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:37.218 [2024-12-11 13:24:28.720877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.311 ms 00:26:37.218 [2024-12-11 13:24:28.720889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.218 [2024-12-11 13:24:28.721663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.218 [2024-12-11 13:24:28.721692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.218 [2024-12-11 13:24:28.721710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:26:37.218 [2024-12-11 13:24:28.721724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.823586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.823673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:37.478 [2024-12-11 13:24:28.823704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.951 ms 00:26:37.478 [2024-12-11 13:24:28.823717] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.864016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.864082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:37.478 [2024-12-11 13:24:28.864104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.247 ms 00:26:37.478 [2024-12-11 13:24:28.864124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.902381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.902439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:37.478 [2024-12-11 13:24:28.902460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.259 ms 00:26:37.478 [2024-12-11 13:24:28.902471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.939595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.939651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:37.478 [2024-12-11 13:24:28.939672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.125 ms 00:26:37.478 [2024-12-11 13:24:28.939699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.939756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.939769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:37.478 [2024-12-11 13:24:28.939791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:37.478 [2024-12-11 13:24:28.939803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.939934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.478 [2024-12-11 13:24:28.939952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:37.478 [2024-12-11 13:24:28.939966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:37.478 [2024-12-11 13:24:28.939976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.478 [2024-12-11 13:24:28.941348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3837.162 ms, result 0 00:26:37.478 { 00:26:37.478 "name": "ftl0", 00:26:37.478 "uuid": "fc0ae49a-7b70-485f-90a1-9e0399327912" 00:26:37.478 } 00:26:37.478 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:37.478 13:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:37.737 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:37.737 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:37.737 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:37.996 /dev/nbd0 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:37.996 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:37.997 1+0 records in 00:26:37.997 1+0 records out 00:26:37.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399786 s, 10.2 MB/s 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:37.997 13:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:37.997 [2024-12-11 13:24:29.534424] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:26:37.997 [2024-12-11 13:24:29.534566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82716 ] 00:26:38.256 [2024-12-11 13:24:29.719713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.515 [2024-12-11 13:24:29.863715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.893  [2024-12-11T13:24:32.399Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-11T13:24:33.338Z] Copying: 390/1024 [MB] (195 MBps) [2024-12-11T13:24:34.276Z] Copying: 586/1024 [MB] (195 MBps) [2024-12-11T13:24:35.662Z] Copying: 780/1024 [MB] (194 MBps) [2024-12-11T13:24:35.662Z] Copying: 966/1024 [MB] (185 MBps) [2024-12-11T13:24:37.042Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:26:45.474 00:26:45.474 13:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:47.381 13:24:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:47.381 [2024-12-11 13:24:38.668840] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
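At this point ftl0 has been created and exported as /dev/nbd0, and the spdk_dd run starting here fills a 1 GiB test file with random data (262144 blocks of 4096 bytes). A sketch condensing this step and the ones traced further down (checksum, copy onto the device, sync, teardown), under the same SPDK_DIR assumption as the sketch above; the relative testfile path is a simplification of the harness's test/ftl/testfile.

rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nbd_start_disk ftl0 /dev/nbd0           # expose the FTL bdev as a kernel block device
"$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of=testfile \
    --bs=4096 --count=262144                   # 262144 * 4 KiB = 1 GiB of random data
md5sum testfile                                # reference checksum (dirty_shutdown.sh@76)
"$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if=testfile --of=/dev/nbd0 \
    --bs=4096 --count=262144 --oflag=direct    # stream it onto ftl0 through nbd
sync /dev/nbd0                                 # flush before teardown (dirty_shutdown.sh@78)
"$rpc" nbd_stop_disk /dev/nbd0
"$rpc" bdev_ftl_unload -b ftl0                 # the RPC traced at dirty_shutdown.sh@80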
00:26:47.381 [2024-12-11 13:24:38.669664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82809 ] 00:26:47.381 [2024-12-11 13:24:38.854761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.640 [2024-12-11 13:24:38.996626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.030  [2024-12-11T13:24:41.542Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-11T13:24:42.480Z] Copying: 34/1024 [MB] (17 MBps) [2024-12-11T13:24:43.416Z] Copying: 52/1024 [MB] (17 MBps) [2024-12-11T13:24:44.793Z] Copying: 69/1024 [MB] (17 MBps) [2024-12-11T13:24:45.733Z] Copying: 87/1024 [MB] (17 MBps) [2024-12-11T13:24:46.670Z] Copying: 104/1024 [MB] (17 MBps) [2024-12-11T13:24:47.607Z] Copying: 121/1024 [MB] (17 MBps) [2024-12-11T13:24:48.545Z] Copying: 138/1024 [MB] (17 MBps) [2024-12-11T13:24:49.482Z] Copying: 155/1024 [MB] (16 MBps) [2024-12-11T13:24:50.418Z] Copying: 172/1024 [MB] (17 MBps) [2024-12-11T13:24:51.794Z] Copying: 190/1024 [MB] (17 MBps) [2024-12-11T13:24:52.362Z] Copying: 208/1024 [MB] (17 MBps) [2024-12-11T13:24:53.768Z] Copying: 225/1024 [MB] (17 MBps) [2024-12-11T13:24:54.706Z] Copying: 242/1024 [MB] (17 MBps) [2024-12-11T13:24:55.643Z] Copying: 259/1024 [MB] (17 MBps) [2024-12-11T13:24:56.581Z] Copying: 277/1024 [MB] (17 MBps) [2024-12-11T13:24:57.519Z] Copying: 294/1024 [MB] (17 MBps) [2024-12-11T13:24:58.456Z] Copying: 311/1024 [MB] (17 MBps) [2024-12-11T13:24:59.394Z] Copying: 328/1024 [MB] (17 MBps) [2024-12-11T13:25:00.773Z] Copying: 345/1024 [MB] (16 MBps) [2024-12-11T13:25:01.341Z] Copying: 362/1024 [MB] (16 MBps) [2024-12-11T13:25:02.721Z] Copying: 379/1024 [MB] (16 MBps) [2024-12-11T13:25:03.658Z] Copying: 396/1024 [MB] (16 MBps) [2024-12-11T13:25:04.595Z] Copying: 413/1024 [MB] (17 MBps) [2024-12-11T13:25:05.542Z] Copying: 431/1024 [MB] (17 MBps) [2024-12-11T13:25:06.517Z] Copying: 448/1024 [MB] (17 MBps) [2024-12-11T13:25:07.454Z] Copying: 466/1024 [MB] (17 MBps) [2024-12-11T13:25:08.390Z] Copying: 483/1024 [MB] (17 MBps) [2024-12-11T13:25:09.766Z] Copying: 500/1024 [MB] (17 MBps) [2024-12-11T13:25:10.334Z] Copying: 517/1024 [MB] (17 MBps) [2024-12-11T13:25:11.710Z] Copying: 535/1024 [MB] (17 MBps) [2024-12-11T13:25:12.647Z] Copying: 552/1024 [MB] (17 MBps) [2024-12-11T13:25:13.582Z] Copying: 570/1024 [MB] (17 MBps) [2024-12-11T13:25:14.519Z] Copying: 587/1024 [MB] (17 MBps) [2024-12-11T13:25:15.455Z] Copying: 604/1024 [MB] (17 MBps) [2024-12-11T13:25:16.392Z] Copying: 622/1024 [MB] (17 MBps) [2024-12-11T13:25:17.329Z] Copying: 640/1024 [MB] (17 MBps) [2024-12-11T13:25:18.708Z] Copying: 657/1024 [MB] (17 MBps) [2024-12-11T13:25:19.646Z] Copying: 675/1024 [MB] (17 MBps) [2024-12-11T13:25:20.614Z] Copying: 692/1024 [MB] (17 MBps) [2024-12-11T13:25:21.550Z] Copying: 710/1024 [MB] (17 MBps) [2024-12-11T13:25:22.487Z] Copying: 727/1024 [MB] (16 MBps) [2024-12-11T13:25:23.424Z] Copying: 744/1024 [MB] (17 MBps) [2024-12-11T13:25:24.362Z] Copying: 761/1024 [MB] (16 MBps) [2024-12-11T13:25:25.738Z] Copying: 778/1024 [MB] (17 MBps) [2024-12-11T13:25:26.306Z] Copying: 795/1024 [MB] (16 MBps) [2024-12-11T13:25:27.683Z] Copying: 811/1024 [MB] (16 MBps) [2024-12-11T13:25:28.620Z] Copying: 827/1024 [MB] (16 MBps) [2024-12-11T13:25:29.557Z] Copying: 843/1024 [MB] (16 MBps) [2024-12-11T13:25:30.494Z] Copying: 860/1024 [MB] (16 MBps) 
[2024-12-11T13:25:31.430Z] Copying: 877/1024 [MB] (16 MBps) [2024-12-11T13:25:32.366Z] Copying: 893/1024 [MB] (16 MBps) [2024-12-11T13:25:33.303Z] Copying: 910/1024 [MB] (16 MBps) [2024-12-11T13:25:34.722Z] Copying: 927/1024 [MB] (16 MBps) [2024-12-11T13:25:35.289Z] Copying: 944/1024 [MB] (17 MBps) [2024-12-11T13:25:36.663Z] Copying: 961/1024 [MB] (16 MBps) [2024-12-11T13:25:37.598Z] Copying: 978/1024 [MB] (16 MBps) [2024-12-11T13:25:38.533Z] Copying: 995/1024 [MB] (17 MBps) [2024-12-11T13:25:39.100Z] Copying: 1012/1024 [MB] (16 MBps) [2024-12-11T13:25:40.475Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:27:48.907 00:27:48.908 13:25:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:48.908 13:25:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:48.908 13:25:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:49.166 [2024-12-11 13:25:40.640943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.641015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:49.166 [2024-12-11 13:25:40.641034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:49.166 [2024-12-11 13:25:40.641049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.641080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:49.166 [2024-12-11 13:25:40.645221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.645260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:49.166 [2024-12-11 13:25:40.645279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:27:49.166 [2024-12-11 13:25:40.645291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.647406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.647451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:49.166 [2024-12-11 13:25:40.647472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:27:49.166 [2024-12-11 13:25:40.647485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.664796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.664843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:49.166 [2024-12-11 13:25:40.664861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:27:49.166 [2024-12-11 13:25:40.664874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.669566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.669603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:49.166 [2024-12-11 13:25:40.669620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:27:49.166 [2024-12-11 13:25:40.669632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.703570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.703611] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:49.166 [2024-12-11 13:25:40.703630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.908 ms 00:27:49.166 [2024-12-11 13:25:40.703642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.724821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.724862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:49.166 [2024-12-11 13:25:40.724886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.158 ms 00:27:49.166 [2024-12-11 13:25:40.724899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.166 [2024-12-11 13:25:40.725053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.166 [2024-12-11 13:25:40.725070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:49.166 [2024-12-11 13:25:40.725086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:27:49.166 [2024-12-11 13:25:40.725098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.425 [2024-12-11 13:25:40.759216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.425 [2024-12-11 13:25:40.759256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:49.425 [2024-12-11 13:25:40.759275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.126 ms 00:27:49.425 [2024-12-11 13:25:40.759286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.425 [2024-12-11 13:25:40.792810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.425 [2024-12-11 13:25:40.792851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:49.425 [2024-12-11 13:25:40.792870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.526 ms 00:27:49.425 [2024-12-11 13:25:40.792882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.425 [2024-12-11 13:25:40.825818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.425 [2024-12-11 13:25:40.825858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:49.425 [2024-12-11 13:25:40.825876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.934 ms 00:27:49.425 [2024-12-11 13:25:40.825887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.425 [2024-12-11 13:25:40.859099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.425 [2024-12-11 13:25:40.859149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:49.425 [2024-12-11 13:25:40.859167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.159 ms 00:27:49.425 [2024-12-11 13:25:40.859179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.425 [2024-12-11 13:25:40.859226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:49.425 [2024-12-11 13:25:40.859245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:49.425 [2024-12-11 13:25:40.859263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:49.425 [2024-12-11 13:25:40.859275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:27:49.425 [2024-12-11 13:25:40.859291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[Bands 5 through 99 elided: every entry is identical, 0 / 261120 wr_cnt: 0 state: free]
00:27:49.426 [2024-12-11 13:25:40.860659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:49.426 [2024-12-11 13:25:40.860678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:49.426 [2024-12-11 13:25:40.860693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc0ae49a-7b70-485f-90a1-9e0399327912 00:27:49.426 [2024-12-11 13:25:40.860705] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:49.426 [2024-12-11 13:25:40.860722] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:49.426 [2024-12-11 13:25:40.860734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:49.426 [2024-12-11 13:25:40.860753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:49.426 [2024-12-11 13:25:40.860764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:49.426 [2024-12-11 13:25:40.860778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:49.426 [2024-12-11 13:25:40.860790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:49.426 [2024-12-11 13:25:40.860803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:49.426 [2024-12-11 13:25:40.860813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:49.426 [2024-12-11 13:25:40.860827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.426 [2024-12-11 13:25:40.860839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:49.426 [2024-12-11 13:25:40.860855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:27:49.426 [2024-12-11 13:25:40.860867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.879873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.426 [2024-12-11 13:25:40.879911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:49.426 [2024-12-11 13:25:40.879928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.976 ms 00:27:49.426 [2024-12-11 13:25:40.879940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.880472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.426 [2024-12-11 13:25:40.880492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:49.426 [2024-12-11 13:25:40.880508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:27:49.426 [2024-12-11 13:25:40.880520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.942421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.426 [2024-12-11 13:25:40.942458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.426 [2024-12-11 13:25:40.942475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.426 [2024-12-11 13:25:40.942487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.942543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.426 [2024-12-11 13:25:40.942555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.426 [2024-12-11 13:25:40.942570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.426 [2024-12-11 13:25:40.942582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.942682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.426 [2024-12-11 13:25:40.942701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.426 [2024-12-11 13:25:40.942716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:27:49.426 [2024-12-11 13:25:40.942727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.426 [2024-12-11 13:25:40.942755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.426 [2024-12-11 13:25:40.942767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.426 [2024-12-11 13:25:40.942782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.426 [2024-12-11 13:25:40.942794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.684 [2024-12-11 13:25:41.057756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.684 [2024-12-11 13:25:41.057810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.684 [2024-12-11 13:25:41.057828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.684 [2024-12-11 13:25:41.057840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.684 [2024-12-11 13:25:41.154159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.684 [2024-12-11 13:25:41.154208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.684 [2024-12-11 13:25:41.154226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.684 [2024-12-11 13:25:41.154239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.685 [2024-12-11 13:25:41.154399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.685 [2024-12-11 13:25:41.154510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.685 [2024-12-11 13:25:41.154678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:49.685 [2024-12-11 13:25:41.154768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.685 [2024-12-11 
13:25:41.154853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.154919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.685 [2024-12-11 13:25:41.154933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.685 [2024-12-11 13:25:41.154947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.685 [2024-12-11 13:25:41.154959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.685 [2024-12-11 13:25:41.155106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 514.949 ms, result 0 00:27:49.685 true 00:27:49.685 13:25:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82573 00:27:49.685 13:25:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82573 00:27:49.685 13:25:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:49.943 [2024-12-11 13:25:41.301318] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:27:49.943 [2024-12-11 13:25:41.301448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83444 ] 00:27:49.943 [2024-12-11 13:25:41.488542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.201 [2024-12-11 13:25:41.597355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.578  [2024-12-11T13:25:44.082Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-11T13:25:45.015Z] Copying: 402/1024 [MB] (201 MBps) [2024-12-11T13:25:45.953Z] Copying: 607/1024 [MB] (205 MBps) [2024-12-11T13:25:46.890Z] Copying: 809/1024 [MB] (202 MBps) [2024-12-11T13:25:47.148Z] Copying: 1007/1024 [MB] (197 MBps) [2024-12-11T13:25:48.083Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:27:56.515 00:27:56.515 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82573 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:56.515 13:25:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:56.773 [2024-12-11 13:25:48.138791] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
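
The "WAF: inf" in the statistics dump above follows directly from the two counters printed beside it: write amplification factor is total media writes divided by user writes, and this shutdown recorded 960 internal writes against 0 user writes, so the ratio is reported as infinity. A minimal sketch of that arithmetic and of pulling the counters out of the log text (plain Python, not an SPDK tool; the regexes are assumptions based on the record format above):

import re

def waf(total_writes: int, user_writes: int) -> float:
    # WAF = total media writes / user writes; zero user writes is what
    # ftl_debug.c reports as "WAF: inf" in the dump above.
    return float("inf") if user_writes == 0 else total_writes / user_writes

dump = """[FTL][ftl0] total writes: 960
[FTL][ftl0] user writes: 0"""
total = int(re.search(r"total writes: (\d+)", dump).group(1))
user = int(re.search(r"user writes: (\d+)", dump).group(1))
print(waf(total, user))  # inf, matching the dump

The same kind of sanity check applies to the spdk_dd progress entries above: 1024 MB at the reported average of 201 MBps is roughly five seconds, which matches the spread of the bracketed Zulu timestamps on those progress lines.
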
00:27:56.773 [2024-12-11 13:25:48.138928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83521 ] 00:27:56.773 [2024-12-11 13:25:48.326747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.032 [2024-12-11 13:25:48.429664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.290 [2024-12-11 13:25:48.780682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.290 [2024-12-11 13:25:48.780753] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.290 [2024-12-11 13:25:48.847000] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:57.290 [2024-12-11 13:25:48.847340] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:57.290 [2024-12-11 13:25:48.847629] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:57.859 [2024-12-11 13:25:49.171246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.171297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:57.859 [2024-12-11 13:25:49.171313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:57.859 [2024-12-11 13:25:49.171328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.171377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.171391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.859 [2024-12-11 13:25:49.171403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:57.859 [2024-12-11 13:25:49.171415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.171439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:57.859 [2024-12-11 13:25:49.172412] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:57.859 [2024-12-11 13:25:49.172438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.172450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.859 [2024-12-11 13:25:49.172463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:27:57.859 [2024-12-11 13:25:49.172474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.174047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:57.859 [2024-12-11 13:25:49.192657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.192698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:57.859 [2024-12-11 13:25:49.192713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.641 ms 00:27:57.859 [2024-12-11 13:25:49.192726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.192794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.192809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:57.859 [2024-12-11 13:25:49.192821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:57.859 [2024-12-11 13:25:49.192833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.199681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.199711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.859 [2024-12-11 13:25:49.199724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.780 ms 00:27:57.859 [2024-12-11 13:25:49.199736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.199816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.199831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.859 [2024-12-11 13:25:49.199844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:57.859 [2024-12-11 13:25:49.199856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.199904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.199916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:57.859 [2024-12-11 13:25:49.199929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.859 [2024-12-11 13:25:49.199940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.199967] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.859 [2024-12-11 13:25:49.204667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.204701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.859 [2024-12-11 13:25:49.204714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.713 ms 00:27:57.859 [2024-12-11 13:25:49.204726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.204763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.204776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:57.859 [2024-12-11 13:25:49.204788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:57.859 [2024-12-11 13:25:49.204800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.204861] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:57.859 [2024-12-11 13:25:49.204889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:57.859 [2024-12-11 13:25:49.204926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:57.859 [2024-12-11 13:25:49.204944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:57.859 [2024-12-11 13:25:49.205032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:57.859 [2024-12-11 13:25:49.205047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:57.859 
[2024-12-11 13:25:49.205062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:57.859 [2024-12-11 13:25:49.205079] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:57.859 [2024-12-11 13:25:49.205093] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:57.859 [2024-12-11 13:25:49.205105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:57.859 [2024-12-11 13:25:49.205131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:57.859 [2024-12-11 13:25:49.205143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:57.859 [2024-12-11 13:25:49.205154] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:57.859 [2024-12-11 13:25:49.205166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.205177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:57.859 [2024-12-11 13:25:49.205189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:27:57.859 [2024-12-11 13:25:49.205200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.205274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.859 [2024-12-11 13:25:49.205291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:57.859 [2024-12-11 13:25:49.205303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:57.859 [2024-12-11 13:25:49.205314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.859 [2024-12-11 13:25:49.205398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:57.859 [2024-12-11 13:25:49.205419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:57.859 [2024-12-11 13:25:49.205431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.859 [2024-12-11 13:25:49.205443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.859 [2024-12-11 13:25:49.205459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:57.859 [2024-12-11 13:25:49.205471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:57.859 [2024-12-11 13:25:49.205482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:57.859 [2024-12-11 13:25:49.205492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:57.859 [2024-12-11 13:25:49.205503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:57.859 [2024-12-11 13:25:49.205535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.859 [2024-12-11 13:25:49.205546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:57.859 [2024-12-11 13:25:49.205556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:57.859 [2024-12-11 13:25:49.205566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.859 [2024-12-11 13:25:49.205577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:57.860 [2024-12-11 13:25:49.205587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:57.860 [2024-12-11 13:25:49.205597] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:57.860 [2024-12-11 13:25:49.205618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:57.860 [2024-12-11 13:25:49.205649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:57.860 [2024-12-11 13:25:49.205680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:57.860 [2024-12-11 13:25:49.205709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:57.860 [2024-12-11 13:25:49.205739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:57.860 [2024-12-11 13:25:49.205769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.860 [2024-12-11 13:25:49.205789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:57.860 [2024-12-11 13:25:49.205800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:57.860 [2024-12-11 13:25:49.205813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.860 [2024-12-11 13:25:49.205825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:57.860 [2024-12-11 13:25:49.205835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:57.860 [2024-12-11 13:25:49.205845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:57.860 [2024-12-11 13:25:49.205866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:57.860 [2024-12-11 13:25:49.205875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.860 [2024-12-11 13:25:49.205885] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:57.860 [2024-12-11 13:25:49.205896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:57.860 [2024-12-11 13:25:49.205912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.860 [2024-12-11 
13:25:49.205934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:57.860 [2024-12-11 13:25:49.205944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:57.860 [2024-12-11 13:25:49.205954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:57.860 [2024-12-11 13:25:49.205965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:57.860 [2024-12-11 13:25:49.205975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:57.860 [2024-12-11 13:25:49.205985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:57.860 [2024-12-11 13:25:49.205998] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:57.860 [2024-12-11 13:25:49.206010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:57.860 [2024-12-11 13:25:49.206034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:57.860 [2024-12-11 13:25:49.206045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:57.860 [2024-12-11 13:25:49.206056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:57.860 [2024-12-11 13:25:49.206067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:57.860 [2024-12-11 13:25:49.206079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:57.860 [2024-12-11 13:25:49.206091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:57.860 [2024-12-11 13:25:49.206102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:57.860 [2024-12-11 13:25:49.206125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:57.860 [2024-12-11 13:25:49.206138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:57.860 [2024-12-11 13:25:49.206201] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:57.860 [2024-12-11 13:25:49.206214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.860 [2024-12-11 13:25:49.206239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:57.860 [2024-12-11 13:25:49.206252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:57.860 [2024-12-11 13:25:49.206263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:57.860 [2024-12-11 13:25:49.206275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.206286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:57.860 [2024-12-11 13:25:49.206298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:27:57.860 [2024-12-11 13:25:49.206308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.246090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.246140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.860 [2024-12-11 13:25:49.246155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.792 ms 00:27:57.860 [2024-12-11 13:25:49.246167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.246246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.246259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:57.860 [2024-12-11 13:25:49.246271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:57.860 [2024-12-11 13:25:49.246282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.328181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.328219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.860 [2024-12-11 13:25:49.328238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.971 ms 00:27:57.860 [2024-12-11 13:25:49.328250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.328297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.328311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.860 [2024-12-11 13:25:49.328324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:57.860 [2024-12-11 13:25:49.328336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.328864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.328888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.860 [2024-12-11 13:25:49.328902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:27:57.860 [2024-12-11 13:25:49.328918] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.329038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.329062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.860 [2024-12-11 13:25:49.329076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:57.860 [2024-12-11 13:25:49.329087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.347854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.347890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.860 [2024-12-11 13:25:49.347906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.773 ms 00:27:57.860 [2024-12-11 13:25:49.347918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.366793] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:57.860 [2024-12-11 13:25:49.366836] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.860 [2024-12-11 13:25:49.366853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.366867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.860 [2024-12-11 13:25:49.366879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.854 ms 00:27:57.860 [2024-12-11 13:25:49.366891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.860 [2024-12-11 13:25:49.395558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.860 [2024-12-11 13:25:49.395600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.860 [2024-12-11 13:25:49.395616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.665 ms 00:27:57.860 [2024-12-11 13:25:49.395628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.861 [2024-12-11 13:25:49.413405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.861 [2024-12-11 13:25:49.413459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.861 [2024-12-11 13:25:49.413475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.758 ms 00:27:57.861 [2024-12-11 13:25:49.413486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.430398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.430440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:58.120 [2024-12-11 13:25:49.430454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.891 ms 00:27:58.120 [2024-12-11 13:25:49.430466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.431243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.431280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:58.120 [2024-12-11 13:25:49.431294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:27:58.120 [2024-12-11 13:25:49.431307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:58.120 [2024-12-11 13:25:49.517108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.517171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:58.120 [2024-12-11 13:25:49.517190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.915 ms 00:27:58.120 [2024-12-11 13:25:49.517202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.527025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:58.120 [2024-12-11 13:25:49.529251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.529285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.120 [2024-12-11 13:25:49.529301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:27:58.120 [2024-12-11 13:25:49.529319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.529400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.529416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:58.120 [2024-12-11 13:25:49.529429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:58.120 [2024-12-11 13:25:49.529442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.529526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.529542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.120 [2024-12-11 13:25:49.529554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:58.120 [2024-12-11 13:25:49.529566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.529597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.529621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.120 [2024-12-11 13:25:49.529633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.120 [2024-12-11 13:25:49.529644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.529685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:58.120 [2024-12-11 13:25:49.529700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.529713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:58.120 [2024-12-11 13:25:49.529726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:58.120 [2024-12-11 13:25:49.529743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.563570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 13:25:49.563614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.120 [2024-12-11 13:25:49.563630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.858 ms 00:27:58.120 [2024-12-11 13:25:49.563642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.563718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.120 [2024-12-11 
13:25:49.563733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.120 [2024-12-11 13:25:49.563745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:58.120 [2024-12-11 13:25:49.563757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.120 [2024-12-11 13:25:49.564874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.821 ms, result 0 00:27:59.136  [2024-12-11T13:25:51.649Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-11T13:25:52.587Z] Copying: 49/1024 [MB] (24 MBps) [2024-12-11T13:25:53.966Z] Copying: 73/1024 [MB] (23 MBps) [2024-12-11T13:25:54.906Z] Copying: 95/1024 [MB] (22 MBps) [2024-12-11T13:25:55.842Z] Copying: 117/1024 [MB] (22 MBps) [2024-12-11T13:25:56.780Z] Copying: 141/1024 [MB] (23 MBps) [2024-12-11T13:25:57.717Z] Copying: 164/1024 [MB] (23 MBps) [2024-12-11T13:25:58.654Z] Copying: 186/1024 [MB] (22 MBps) [2024-12-11T13:25:59.591Z] Copying: 209/1024 [MB] (23 MBps) [2024-12-11T13:26:00.970Z] Copying: 233/1024 [MB] (23 MBps) [2024-12-11T13:26:01.908Z] Copying: 257/1024 [MB] (23 MBps) [2024-12-11T13:26:02.845Z] Copying: 279/1024 [MB] (22 MBps) [2024-12-11T13:26:03.782Z] Copying: 302/1024 [MB] (22 MBps) [2024-12-11T13:26:04.720Z] Copying: 324/1024 [MB] (22 MBps) [2024-12-11T13:26:05.660Z] Copying: 347/1024 [MB] (22 MBps) [2024-12-11T13:26:06.598Z] Copying: 369/1024 [MB] (22 MBps) [2024-12-11T13:26:07.977Z] Copying: 392/1024 [MB] (22 MBps) [2024-12-11T13:26:08.915Z] Copying: 413/1024 [MB] (21 MBps) [2024-12-11T13:26:09.853Z] Copying: 436/1024 [MB] (22 MBps) [2024-12-11T13:26:10.790Z] Copying: 459/1024 [MB] (22 MBps) [2024-12-11T13:26:11.728Z] Copying: 482/1024 [MB] (22 MBps) [2024-12-11T13:26:12.665Z] Copying: 505/1024 [MB] (23 MBps) [2024-12-11T13:26:13.603Z] Copying: 527/1024 [MB] (21 MBps) [2024-12-11T13:26:14.541Z] Copying: 550/1024 [MB] (22 MBps) [2024-12-11T13:26:15.920Z] Copying: 572/1024 [MB] (22 MBps) [2024-12-11T13:26:16.858Z] Copying: 595/1024 [MB] (22 MBps) [2024-12-11T13:26:17.795Z] Copying: 618/1024 [MB] (22 MBps) [2024-12-11T13:26:18.734Z] Copying: 641/1024 [MB] (22 MBps) [2024-12-11T13:26:19.672Z] Copying: 663/1024 [MB] (22 MBps) [2024-12-11T13:26:20.610Z] Copying: 687/1024 [MB] (23 MBps) [2024-12-11T13:26:21.546Z] Copying: 710/1024 [MB] (23 MBps) [2024-12-11T13:26:22.926Z] Copying: 733/1024 [MB] (22 MBps) [2024-12-11T13:26:23.863Z] Copying: 756/1024 [MB] (22 MBps) [2024-12-11T13:26:24.801Z] Copying: 778/1024 [MB] (22 MBps) [2024-12-11T13:26:25.763Z] Copying: 800/1024 [MB] (22 MBps) [2024-12-11T13:26:26.712Z] Copying: 823/1024 [MB] (22 MBps) [2024-12-11T13:26:27.648Z] Copying: 848/1024 [MB] (25 MBps) [2024-12-11T13:26:28.585Z] Copying: 873/1024 [MB] (24 MBps) [2024-12-11T13:26:29.522Z] Copying: 898/1024 [MB] (24 MBps) [2024-12-11T13:26:30.900Z] Copying: 922/1024 [MB] (24 MBps) [2024-12-11T13:26:31.845Z] Copying: 947/1024 [MB] (24 MBps) [2024-12-11T13:26:32.782Z] Copying: 971/1024 [MB] (24 MBps) [2024-12-11T13:26:33.720Z] Copying: 996/1024 [MB] (24 MBps) [2024-12-11T13:26:34.658Z] Copying: 1020/1024 [MB] (24 MBps) [2024-12-11T13:26:34.658Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-11 13:26:34.387222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.090 [2024-12-11 13:26:34.387300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:43.090 [2024-12-11 13:26:34.387319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
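
Both the shutdown and startup sequences above are built from the same four-record trace_step groups emitted by mngt/ftl_mngt.c (Action, name, duration, status), so the per-step timings can be tallied straight from the text; summed over one management process they should land near its reported total (here 'FTL startup', duration = 393.821 ms). A hedged sketch of such a tally (plain Python over a saved copy of this log, assuming one record per line as the log is normally emitted; not an SPDK utility):

import re

def step_durations(log_path: str) -> list[tuple[str, float]]:
    # Pair each trace_step "name: <step>" record with the "duration: <ms> ms"
    # record that follows it in the same group.
    steps, pending = [], None
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = re.search(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)", line)
            if m:
                pending = m.group(1).strip()
                continue
            m = re.search(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms", line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None
    return steps

# e.g. sum(d for _, d in step_durations("console.log")) over the startup window
# should approximate the 393.821 ms reported for 'FTL startup' above.

The throughput of the copy into ftl0 also checks out against the wall clock: 1024 MB at an average of 22 MBps is about 46 seconds, consistent with the roughly 45 seconds between the startup finishing at 13:25:49 and the shutdown trace starting at 13:26:34.
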
00:28:43.090 [2024-12-11 13:26:34.387332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.090 [2024-12-11 13:26:34.390964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:43.090 [2024-12-11 13:26:34.396815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.090 [2024-12-11 13:26:34.396850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:43.090 [2024-12-11 13:26:34.396864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.794 ms 00:28:43.090 [2024-12-11 13:26:34.396882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.090 [2024-12-11 13:26:34.406141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.090 [2024-12-11 13:26:34.406180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:43.090 [2024-12-11 13:26:34.406195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.251 ms 00:28:43.090 [2024-12-11 13:26:34.406223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.090 [2024-12-11 13:26:34.430652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.090 [2024-12-11 13:26:34.430691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:43.090 [2024-12-11 13:26:34.430707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.449 ms 00:28:43.090 [2024-12-11 13:26:34.430719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.090 [2024-12-11 13:26:34.435641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.090 [2024-12-11 13:26:34.435672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:43.091 [2024-12-11 13:26:34.435685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:28:43.091 [2024-12-11 13:26:34.435695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.091 [2024-12-11 13:26:34.471510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.091 [2024-12-11 13:26:34.471548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:43.091 [2024-12-11 13:26:34.471561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.823 ms 00:28:43.091 [2024-12-11 13:26:34.471572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.091 [2024-12-11 13:26:34.491754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.091 [2024-12-11 13:26:34.491790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:43.091 [2024-12-11 13:26:34.491803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.177 ms 00:28:43.091 [2024-12-11 13:26:34.491814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.091 [2024-12-11 13:26:34.612900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.091 [2024-12-11 13:26:34.612941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:43.091 [2024-12-11 13:26:34.612964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.240 ms 00:28:43.091 [2024-12-11 13:26:34.612975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.091 [2024-12-11 13:26:34.650034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.091 [2024-12-11 
13:26:34.650072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:43.091 [2024-12-11 13:26:34.650087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.100 ms
00:28:43.091 [2024-12-11 13:26:34.650124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.351 [2024-12-11 13:26:34.686047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.351 [2024-12-11 13:26:34.686085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:43.351 [2024-12-11 13:26:34.686100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.939 ms
00:28:43.351 [2024-12-11 13:26:34.686111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.351 [2024-12-11 13:26:34.721550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.351 [2024-12-11 13:26:34.721585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:43.351 [2024-12-11 13:26:34.721600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.429 ms
00:28:43.351 [2024-12-11 13:26:34.721627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.351 [2024-12-11 13:26:34.755804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.351 [2024-12-11 13:26:34.755841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:43.351 [2024-12-11 13:26:34.755854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.151 ms
00:28:43.351 [2024-12-11 13:26:34.755864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.351 [2024-12-11 13:26:34.755901] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:43.351 [2024-12-11 13:26:34.755918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108032 / 261120 wr_cnt: 1 state: open
00:28:43.351 [2024-12-11 13:26:34.755933 .. 13:26:34.757130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries)
00:28:43.352 [2024-12-11 13:26:34.757149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:43.352 [2024-12-11 13:26:34.757160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc0ae49a-7b70-485f-90a1-9e0399327912
00:28:43.352 [2024-12-11 13:26:34.757191] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108032
00:28:43.352 [2024-12-11 13:26:34.757202] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108992
00:28:43.352 [2024-12-11 13:26:34.757212] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108032
00:28:43.352 [2024-12-11 13:26:34.757223] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089
00:28:43.353 [2024-12-11 13:26:34.757234] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:43.353 [2024-12-11 13:26:34.757245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:43.353 [2024-12-11 13:26:34.757255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:43.353 [2024-12-11 13:26:34.757264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:43.353 [2024-12-11 13:26:34.757279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:43.353 [2024-12-11 13:26:34.757290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.353 [2024-12-11 13:26:34.757303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:43.353 [2024-12-11 13:26:34.757315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms
00:28:43.353 [2024-12-11 13:26:34.757325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
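A quick cross-check of the ftl_dev_dump_stats block above: the WAF figure is simply total media writes divided by user writes, and the dumped numbers are self-consistent. A minimal sketch (Python; the values are read off the log above, the variable names are ours, not an SPDK API):

    # Values from the ftl_dev_dump_stats output for ftl0 above.
    total_writes = 108992   # "total writes" (user data plus FTL metadata writes)
    user_writes = 108032    # "user writes" (same as "total valid LBAs" here)

    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                                          # -> WAF = 1.0089, matching the log
    print(f"metadata overhead = {total_writes - user_writes} blocks")  # -> 960 blocks

The 108032 user writes also line up with the band dump: Band 1 is the only written band, filled to 108032 of 261120 blocks and still open.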
00:28:43.353 [2024-12-11 13:26:34.778013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.353 [2024-12-11 13:26:34.778185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:43.353 [2024-12-11 13:26:34.778207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.684 ms
00:28:43.353 [2024-12-11 13:26:34.778219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.353 [2024-12-11 13:26:34.778879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:43.353 [2024-12-11 13:26:34.778898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:43.353 [2024-12-11 13:26:34.778911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms
00:28:43.353 [2024-12-11 13:26:34.778930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.353 [2024-12-11 13:26:34.834067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.353 [2024-12-11 13:26:34.834236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:43.353 [2024-12-11 13:26:34.834260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:43.353 [2024-12-11 13:26:34.834272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.353 [2024-12-11 13:26:34.834340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.353 [2024-12-11 13:26:34.834352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:43.353 [2024-12-11 13:26:34.834364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:43.353 [2024-12-11 13:26:34.834382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.353 [2024-12-11 13:26:34.834454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.353 [2024-12-11 13:26:34.834469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:43.353 [2024-12-11 13:26:34.834481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:43.353 [2024-12-11 13:26:34.834491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.353 [2024-12-11 13:26:34.834511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.353 [2024-12-11 13:26:34.834522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:43.353 [2024-12-11 13:26:34.834534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:43.353 [2024-12-11 13:26:34.834544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.612 [2024-12-11 13:26:34.966321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.612 [2024-12-11 13:26:34.966387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:43.612 [2024-12-11 13:26:34.966404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:43.612 [2024-12-11 13:26:34.966416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:43.612 [2024-12-11 13:26:35.068486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:43.612 [2024-12-11 13:26:35.068545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:43.612 [2024-12-11 13:26:35.068575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:28:43.612 [2024-12-11 13:26:35.068593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.612 [2024-12-11 13:26:35.068694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.612 [2024-12-11 13:26:35.068706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:43.612 [2024-12-11 13:26:35.068717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.612 [2024-12-11 13:26:35.068727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.612 [2024-12-11 13:26:35.068774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.612 [2024-12-11 13:26:35.068786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:43.612 [2024-12-11 13:26:35.068797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.612 [2024-12-11 13:26:35.068807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.612 [2024-12-11 13:26:35.068936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.612 [2024-12-11 13:26:35.068950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:43.613 [2024-12-11 13:26:35.068961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.613 [2024-12-11 13:26:35.068971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.613 [2024-12-11 13:26:35.069011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.613 [2024-12-11 13:26:35.069023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:43.613 [2024-12-11 13:26:35.069034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.613 [2024-12-11 13:26:35.069045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.613 [2024-12-11 13:26:35.069099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.613 [2024-12-11 13:26:35.069111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:43.613 [2024-12-11 13:26:35.069148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.613 [2024-12-11 13:26:35.069175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.613 [2024-12-11 13:26:35.069229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:43.613 [2024-12-11 13:26:35.069241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:43.613 [2024-12-11 13:26:35.069253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:43.613 [2024-12-11 13:26:35.069263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.613 [2024-12-11 13:26:35.069409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 684.526 ms, result 0 00:28:45.518 00:28:45.518 00:28:45.518 13:26:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:46.897 13:26:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:46.897 [2024-12-11 13:26:38.389361] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 
initialization... 00:28:46.897 [2024-12-11 13:26:38.389495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84013 ] 00:28:47.157 [2024-12-11 13:26:38.573321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.157 [2024-12-11 13:26:38.706031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.725 [2024-12-11 13:26:39.122543] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.725 [2024-12-11 13:26:39.122627] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.725 [2024-12-11 13:26:39.288524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.725 [2024-12-11 13:26:39.288783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:47.725 [2024-12-11 13:26:39.288810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:47.725 [2024-12-11 13:26:39.288822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.725 [2024-12-11 13:26:39.288891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.725 [2024-12-11 13:26:39.288908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.725 [2024-12-11 13:26:39.288920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:47.725 [2024-12-11 13:26:39.288930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.725 [2024-12-11 13:26:39.288955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:47.725 [2024-12-11 13:26:39.289835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:47.725 [2024-12-11 13:26:39.289864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.725 [2024-12-11 13:26:39.289877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.725 [2024-12-11 13:26:39.289888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:28:47.725 [2024-12-11 13:26:39.289899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.292358] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:47.986 [2024-12-11 13:26:39.312159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.312201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:47.986 [2024-12-11 13:26:39.312215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.834 ms 00:28:47.986 [2024-12-11 13:26:39.312242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.312345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.312362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:47.986 [2024-12-11 13:26:39.312373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:47.986 [2024-12-11 13:26:39.312383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.324656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.324687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.986 [2024-12-11 13:26:39.324700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.221 ms 00:28:47.986 [2024-12-11 13:26:39.324715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.324802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.324814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.986 [2024-12-11 13:26:39.324826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:47.986 [2024-12-11 13:26:39.324836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.324891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.324903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:47.986 [2024-12-11 13:26:39.324914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:47.986 [2024-12-11 13:26:39.324924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.324954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:47.986 [2024-12-11 13:26:39.330521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.330662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.986 [2024-12-11 13:26:39.330706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:28:47.986 [2024-12-11 13:26:39.330718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.330760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.330773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:47.986 [2024-12-11 13:26:39.330785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:47.986 [2024-12-11 13:26:39.330795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.330835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:47.986 [2024-12-11 13:26:39.330864] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:47.986 [2024-12-11 13:26:39.330902] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:47.986 [2024-12-11 13:26:39.330925] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:47.986 [2024-12-11 13:26:39.331020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:47.986 [2024-12-11 13:26:39.331034] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:47.986 [2024-12-11 13:26:39.331048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:47.986 [2024-12-11 13:26:39.331061] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331074] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:47.986 [2024-12-11 13:26:39.331108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:47.986 [2024-12-11 13:26:39.331136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:47.986 [2024-12-11 13:26:39.331151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:47.986 [2024-12-11 13:26:39.331163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.331173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:47.986 [2024-12-11 13:26:39.331184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:28:47.986 [2024-12-11 13:26:39.331195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.331268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.986 [2024-12-11 13:26:39.331280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:47.986 [2024-12-11 13:26:39.331291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:47.986 [2024-12-11 13:26:39.331301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.986 [2024-12-11 13:26:39.331395] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:47.986 [2024-12-11 13:26:39.331409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:47.986 [2024-12-11 13:26:39.331420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:47.986 [2024-12-11 13:26:39.331451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:47.986 [2024-12-11 13:26:39.331480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.986 [2024-12-11 13:26:39.331499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:47.986 [2024-12-11 13:26:39.331511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:47.986 [2024-12-11 13:26:39.331520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.986 [2024-12-11 13:26:39.331541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:47.986 [2024-12-11 13:26:39.331551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:47.986 [2024-12-11 13:26:39.331561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:47.986 [2024-12-11 13:26:39.331580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331590] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:47.986 [2024-12-11 13:26:39.331609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:47.986 [2024-12-11 13:26:39.331638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:47.986 [2024-12-11 13:26:39.331667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:47.986 [2024-12-11 13:26:39.331695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.986 [2024-12-11 13:26:39.331714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:47.986 [2024-12-11 13:26:39.331723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:47.986 [2024-12-11 13:26:39.331733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.986 [2024-12-11 13:26:39.331742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:47.987 [2024-12-11 13:26:39.331751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:47.987 [2024-12-11 13:26:39.331761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.987 [2024-12-11 13:26:39.331771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:47.987 [2024-12-11 13:26:39.331781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:47.987 [2024-12-11 13:26:39.331789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.987 [2024-12-11 13:26:39.331798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:47.987 [2024-12-11 13:26:39.331807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:47.987 [2024-12-11 13:26:39.331816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.987 [2024-12-11 13:26:39.331826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:47.987 [2024-12-11 13:26:39.331837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:47.987 [2024-12-11 13:26:39.331846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.987 [2024-12-11 13:26:39.331856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.987 [2024-12-11 13:26:39.331867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:47.987 [2024-12-11 13:26:39.331877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:47.987 [2024-12-11 13:26:39.331887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:47.987 
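The layout dump above is internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB that dump_region reports for Region l2p. A quick check (Python; values taken from the ftl_layout_setup lines above; the 4 KiB block size in the final comment is an assumption, not stated in this log):

    # From the ftl_layout_setup output above.
    l2p_entries = 20971520   # "L2P entries: 20971520"
    entry_bytes = 4          # "L2P address size: 4"

    print(l2p_entries * entry_bytes / 2**20, "MiB")   # -> 80.0 MiB ("Region l2p ... blocks: 80.00 MiB")
    # If the FTL block size is 4 KiB (assumption), those entries map 80 GiB of user space.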
[2024-12-11 13:26:39.331896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:47.987 [2024-12-11 13:26:39.331906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:47.987 [2024-12-11 13:26:39.331916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:47.987 [2024-12-11 13:26:39.331927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:47.987 [2024-12-11 13:26:39.331940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.331958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:47.987 [2024-12-11 13:26:39.331969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:47.987 [2024-12-11 13:26:39.331980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:47.987 [2024-12-11 13:26:39.331991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:47.987 [2024-12-11 13:26:39.332002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:47.987 [2024-12-11 13:26:39.332013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:47.987 [2024-12-11 13:26:39.332024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:47.987 [2024-12-11 13:26:39.332034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:47.987 [2024-12-11 13:26:39.332045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:47.987 [2024-12-11 13:26:39.332055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:47.987 [2024-12-11 13:26:39.332107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:47.987 [2024-12-11 13:26:39.332131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:47.987 [2024-12-11 13:26:39.332154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:47.987 [2024-12-11 13:26:39.332164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:47.987 [2024-12-11 13:26:39.332175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:47.987 [2024-12-11 13:26:39.332187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.332198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:47.987 [2024-12-11 13:26:39.332209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:28:47.987 [2024-12-11 13:26:39.332219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.380944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.380983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:47.987 [2024-12-11 13:26:39.380997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.751 ms 00:28:47.987 [2024-12-11 13:26:39.381013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.381090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.381103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:47.987 [2024-12-11 13:26:39.381128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:47.987 [2024-12-11 13:26:39.381139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.445429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.445469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:47.987 [2024-12-11 13:26:39.445482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.282 ms 00:28:47.987 [2024-12-11 13:26:39.445509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.445554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.445571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:47.987 [2024-12-11 13:26:39.445583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:47.987 [2024-12-11 13:26:39.445594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.446414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.446431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:47.987 [2024-12-11 13:26:39.446443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:28:47.987 [2024-12-11 13:26:39.446454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.446582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.446596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:47.987 [2024-12-11 13:26:39.446612] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:47.987 [2024-12-11 13:26:39.446622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.469243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.469280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:47.987 [2024-12-11 13:26:39.469294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.634 ms 00:28:47.987 [2024-12-11 13:26:39.469321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.488288] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:47.987 [2024-12-11 13:26:39.488325] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:47.987 [2024-12-11 13:26:39.488341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.488351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:47.987 [2024-12-11 13:26:39.488363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.940 ms 00:28:47.987 [2024-12-11 13:26:39.488373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.516680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.516717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:47.987 [2024-12-11 13:26:39.516731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.308 ms 00:28:47.987 [2024-12-11 13:26:39.516741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.987 [2024-12-11 13:26:39.534608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.987 [2024-12-11 13:26:39.534644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:47.987 [2024-12-11 13:26:39.534657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.846 ms 00:28:47.987 [2024-12-11 13:26:39.534684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.552269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.552306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.247 [2024-12-11 13:26:39.552320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.574 ms 00:28:48.247 [2024-12-11 13:26:39.552330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.553082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.553106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.247 [2024-12-11 13:26:39.553137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:28:48.247 [2024-12-11 13:26:39.553148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.650864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.650926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.247 [2024-12-11 13:26:39.650969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.848 ms 00:28:48.247 [2024-12-11 13:26:39.650981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.662433] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.247 [2024-12-11 13:26:39.667049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.667081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.247 [2024-12-11 13:26:39.667097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.033 ms 00:28:48.247 [2024-12-11 13:26:39.667108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.667247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.667262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.247 [2024-12-11 13:26:39.667275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:48.247 [2024-12-11 13:26:39.667291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.669578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.669616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.247 [2024-12-11 13:26:39.669629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.241 ms 00:28:48.247 [2024-12-11 13:26:39.669641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.669686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.669699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.247 [2024-12-11 13:26:39.669710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:48.247 [2024-12-11 13:26:39.669721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.669771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.247 [2024-12-11 13:26:39.669784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.669796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.247 [2024-12-11 13:26:39.669808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:48.247 [2024-12-11 13:26:39.669818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.708587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.708747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.247 [2024-12-11 13:26:39.708778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.810 ms 00:28:48.247 [2024-12-11 13:26:39.708790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-12-11 13:26:39.708932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-12-11 13:26:39.708948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.247 [2024-12-11 13:26:39.708960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:48.247 [2024-12-11 13:26:39.708971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
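For the spdk_dd run whose progress is logged just below: --count=262144 blocks end up reported as 1024/1024 [MB], which implies 4096 bytes per block, and the wall time between FTL startup finishing (~13:26:39.7) and the shutdown's first trace_step (~13:27:13.6) is about 34 s, consistent with the reported 30 MBps average. A rough check (Python; timestamps read off the surrounding log lines, the log's MB treated as MiB):

    # spdk_dd was invoked with --count=262144; the progress below ends at 1024/1024 [MB].
    blocks = 262144
    total_mib = 1024
    print(total_mib * 2**20 // blocks, "bytes per block")    # -> 4096

    # Wall time between "FTL startup" finishing and "Deinit core IO channel" (same hour).
    elapsed_s = (27 * 60 + 13.6) - (26 * 60 + 39.7)          # ~33.9 s
    print(round(total_mib / elapsed_s, 1), "MiB/s average")  # -> ~30.2, matching "average 30 MBps"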
00:28:48.247 [2024-12-11 13:26:39.712861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 423.716 ms, result 0
00:28:49.626 [2024-12-11T13:26:42.142Z] Copying: 1168/1048576 [kB] (1168 kBps)
[2024-12-11T13:26:43.077Z] Copying: 10304/1048576 [kB] (9136 kBps)
[2024-12-11T13:26:44.039Z .. 2024-12-11T13:27:13.126Z] Copying: 42/1024 [MB] .. 1016/1024 [MB] (31 periodic updates at 31-33 MBps)
[2024-12-11T13:27:13.694Z] Copying: 1024/1024 [MB] (average 30 MBps)
[2024-12-11 13:27:13.574301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.574400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:22.126 [2024-12-11 13:27:13.574420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:22.126 [2024-12-11 13:27:13.574433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.126 [2024-12-11 13:27:13.574465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:22.126 [2024-12-11 13:27:13.580551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.580598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:22.126 [2024-12-11 13:27:13.580613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.071 ms
00:29:22.126 [2024-12-11 13:27:13.580625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.126 [2024-12-11 13:27:13.580962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.580986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:22.126 [2024-12-11 13:27:13.580999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms
00:29:22.126 [2024-12-11 13:27:13.581010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.126 [2024-12-11 13:27:13.593484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.593669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:22.126 [2024-12-11 13:27:13.593800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms
00:29:22.126 [2024-12-11 13:27:13.593843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.126 [2024-12-11 13:27:13.598798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.598952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:22.126 [2024-12-11 13:27:13.599070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms
00:29:22.126 [2024-12-11 13:27:13.599087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.126 [2024-12-11 13:27:13.634972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.126 [2024-12-11 13:27:13.635009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:22.126 [2024-12-11 13:27:13.635023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms
00:29:22.127 [2024-12-11 13:27:13.635033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.127 [2024-12-11 13:27:13.654718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.127 [2024-12-11 13:27:13.654754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:22.127 [2024-12-11 13:27:13.654768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms
00:29:22.127 [2024-12-11 13:27:13.654778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.127 [2024-12-11 13:27:13.657049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.127 [2024-12-11 13:27:13.657204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:22.127 [2024-12-11 13:27:13.657289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms
00:29:22.127 [2024-12-11 13:27:13.657334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.127 [2024-12-11 13:27:13.691791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.127 [2024-12-11 13:27:13.691939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:22.127 [2024-12-11 13:27:13.691960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.467 ms
00:29:22.127 [2024-12-11 13:27:13.691970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.387 [2024-12-11 13:27:13.726627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.387 [2024-12-11 13:27:13.726661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:22.387 [2024-12-11 13:27:13.726674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.658 ms
00:29:22.387 [2024-12-11 13:27:13.726683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.387 [2024-12-11 13:27:13.760680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.387 [2024-12-11 13:27:13.760717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:22.387 [2024-12-11 13:27:13.760731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.014 ms
00:29:22.387 [2024-12-11 13:27:13.760741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.387 [2024-12-11 13:27:13.796804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.387 [2024-12-11 13:27:13.796840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:22.387 [2024-12-11 13:27:13.796853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.040 ms
00:29:22.387 [2024-12-11 13:27:13.796864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:22.387 [2024-12-11 13:27:13.796904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:22.387 [2024-12-11 13:27:13.796923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:29:22.387 [2024-12-11 13:27:13.796937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:29:22.387 [2024-12-11 13:27:13.796949 .. 13:27:13.797685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-68: 0 / 261120 wr_cnt: 0 state: free (66 identical entries)
00:29:22.388 [2024-12-11 13:27:13.797696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797970] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.797991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.798002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.798013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.798024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.798035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:22.388 [2024-12-11 13:27:13.798054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:22.388 [2024-12-11 13:27:13.798064] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc0ae49a-7b70-485f-90a1-9e0399327912 00:29:22.388 [2024-12-11 13:27:13.798075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:22.388 [2024-12-11 13:27:13.798086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156608 00:29:22.388 [2024-12-11 13:27:13.798100] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154624 00:29:22.388 [2024-12-11 13:27:13.798112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:29:22.388 [2024-12-11 13:27:13.798130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:22.388 [2024-12-11 13:27:13.798154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:22.388 [2024-12-11 13:27:13.798165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:22.388 [2024-12-11 13:27:13.798174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:22.388 [2024-12-11 13:27:13.798183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:22.388 [2024-12-11 13:27:13.798193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.388 [2024-12-11 13:27:13.798204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:22.388 [2024-12-11 13:27:13.798215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:29:22.388 [2024-12-11 13:27:13.798225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.388 [2024-12-11 13:27:13.819155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.388 [2024-12-11 13:27:13.819302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:22.388 [2024-12-11 13:27:13.819322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.927 ms 00:29:22.388 [2024-12-11 13:27:13.819333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.388 [2024-12-11 13:27:13.819986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.388 [2024-12-11 13:27:13.820001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:22.389 [2024-12-11 13:27:13.820013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:29:22.389 [2024-12-11 13:27:13.820023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:29:22.389 [2024-12-11 13:27:13.872331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.389 [2024-12-11 13:27:13.872367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:22.389 [2024-12-11 13:27:13.872380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.389 [2024-12-11 13:27:13.872391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.389 [2024-12-11 13:27:13.872462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.389 [2024-12-11 13:27:13.872473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:22.389 [2024-12-11 13:27:13.872484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.389 [2024-12-11 13:27:13.872494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.389 [2024-12-11 13:27:13.872565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.389 [2024-12-11 13:27:13.872585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:22.389 [2024-12-11 13:27:13.872595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.389 [2024-12-11 13:27:13.872604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.389 [2024-12-11 13:27:13.872622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.389 [2024-12-11 13:27:13.872634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:22.389 [2024-12-11 13:27:13.872644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.389 [2024-12-11 13:27:13.872654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.001876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.001938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:22.649 [2024-12-11 13:27:14.001955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.001982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:22.649 [2024-12-11 13:27:14.103285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:22.649 [2024-12-11 13:27:14.103444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:22.649 [2024-12-11 13:27:14.103547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 
13:27:14.103559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:22.649 [2024-12-11 13:27:14.103713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:22.649 [2024-12-11 13:27:14.103791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:22.649 [2024-12-11 13:27:14.103878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.103940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.649 [2024-12-11 13:27:14.103953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:22.649 [2024-12-11 13:27:14.103964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.649 [2024-12-11 13:27:14.103974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.649 [2024-12-11 13:27:14.104124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.651 ms, result 0 00:29:24.044 00:29:24.044 00:29:24.044 13:27:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:25.423 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:25.423 13:27:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:25.682 [2024-12-11 13:27:17.035831] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:29:25.683 [2024-12-11 13:27:17.035973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84406 ] 00:29:25.683 [2024-12-11 13:27:17.220826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.941 [2024-12-11 13:27:17.352254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.200 [2024-12-11 13:27:17.765887] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:26.200 [2024-12-11 13:27:17.765960] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:26.461 [2024-12-11 13:27:17.932957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.933013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:26.461 [2024-12-11 13:27:17.933030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:26.461 [2024-12-11 13:27:17.933041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.933094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.933111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:26.461 [2024-12-11 13:27:17.933136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:26.461 [2024-12-11 13:27:17.933147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.933170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:26.461 [2024-12-11 13:27:17.934210] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:26.461 [2024-12-11 13:27:17.934249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.934261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:26.461 [2024-12-11 13:27:17.934274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:29:26.461 [2024-12-11 13:27:17.934285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.936650] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:26.461 [2024-12-11 13:27:17.957004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.957046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:26.461 [2024-12-11 13:27:17.957061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.389 ms 00:29:26.461 [2024-12-11 13:27:17.957088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.957174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.957189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:26.461 [2024-12-11 13:27:17.957201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:26.461 [2024-12-11 13:27:17.957212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.969673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:26.461 [2024-12-11 13:27:17.969703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:26.461 [2024-12-11 13:27:17.969718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.405 ms 00:29:26.461 [2024-12-11 13:27:17.969734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.969825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.969840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:26.461 [2024-12-11 13:27:17.969852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:26.461 [2024-12-11 13:27:17.969862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.969921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.969934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:26.461 [2024-12-11 13:27:17.969945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:26.461 [2024-12-11 13:27:17.969956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.969989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:26.461 [2024-12-11 13:27:17.975806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.975839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:26.461 [2024-12-11 13:27:17.975855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.834 ms 00:29:26.461 [2024-12-11 13:27:17.975882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.975919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.461 [2024-12-11 13:27:17.975931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:26.461 [2024-12-11 13:27:17.975942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:26.461 [2024-12-11 13:27:17.975953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.461 [2024-12-11 13:27:17.975994] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:26.461 [2024-12-11 13:27:17.976024] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:26.461 [2024-12-11 13:27:17.976062] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:26.461 [2024-12-11 13:27:17.976086] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:26.461 [2024-12-11 13:27:17.976188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:26.461 [2024-12-11 13:27:17.976203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:26.461 [2024-12-11 13:27:17.976217] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:26.461 [2024-12-11 13:27:17.976230] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:26.461 [2024-12-11 13:27:17.976243] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976255] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:26.462 [2024-12-11 13:27:17.976266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:26.462 [2024-12-11 13:27:17.976276] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:26.462 [2024-12-11 13:27:17.976291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:26.462 [2024-12-11 13:27:17.976303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.462 [2024-12-11 13:27:17.976315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:26.462 [2024-12-11 13:27:17.976325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:29:26.462 [2024-12-11 13:27:17.976336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.462 [2024-12-11 13:27:17.976408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.462 [2024-12-11 13:27:17.976420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:26.462 [2024-12-11 13:27:17.976430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:26.462 [2024-12-11 13:27:17.976441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.462 [2024-12-11 13:27:17.976532] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:26.462 [2024-12-11 13:27:17.976546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:26.462 [2024-12-11 13:27:17.976558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:26.462 [2024-12-11 13:27:17.976591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:26.462 [2024-12-11 13:27:17.976620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:26.462 [2024-12-11 13:27:17.976640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:26.462 [2024-12-11 13:27:17.976651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:26.462 [2024-12-11 13:27:17.976661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:26.462 [2024-12-11 13:27:17.976682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:26.462 [2024-12-11 13:27:17.976692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:26.462 [2024-12-11 13:27:17.976702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:26.462 [2024-12-11 13:27:17.976721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976729] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:26.462 [2024-12-11 13:27:17.976749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:26.462 [2024-12-11 13:27:17.976777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:26.462 [2024-12-11 13:27:17.976804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:26.462 [2024-12-11 13:27:17.976831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:26.462 [2024-12-11 13:27:17.976858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:26.462 [2024-12-11 13:27:17.976876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:26.462 [2024-12-11 13:27:17.976885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:26.462 [2024-12-11 13:27:17.976894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:26.462 [2024-12-11 13:27:17.976903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:26.462 [2024-12-11 13:27:17.976912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:26.462 [2024-12-11 13:27:17.976922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:26.462 [2024-12-11 13:27:17.976940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:26.462 [2024-12-11 13:27:17.976949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.976960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:26.462 [2024-12-11 13:27:17.976971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:26.462 [2024-12-11 13:27:17.976980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:26.462 [2024-12-11 13:27:17.976990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:26.462 [2024-12-11 13:27:17.977001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:26.462 [2024-12-11 13:27:17.977011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:26.462 [2024-12-11 13:27:17.977021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:26.462 
[2024-12-11 13:27:17.977030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:26.462 [2024-12-11 13:27:17.977040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:26.462 [2024-12-11 13:27:17.977050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:26.462 [2024-12-11 13:27:17.977061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:26.462 [2024-12-11 13:27:17.977074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:26.462 [2024-12-11 13:27:17.977101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:26.462 [2024-12-11 13:27:17.977122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:26.462 [2024-12-11 13:27:17.977134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:26.462 [2024-12-11 13:27:17.977146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:26.462 [2024-12-11 13:27:17.977162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:26.462 [2024-12-11 13:27:17.977174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:26.462 [2024-12-11 13:27:17.977185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:26.462 [2024-12-11 13:27:17.977196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:26.462 [2024-12-11 13:27:17.977207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:26.462 [2024-12-11 13:27:17.977276] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:26.462 [2024-12-11 13:27:17.977288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:26.462 [2024-12-11 13:27:17.977310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:26.462 [2024-12-11 13:27:17.977320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:26.462 [2024-12-11 13:27:17.977331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:26.462 [2024-12-11 13:27:17.977342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.462 [2024-12-11 13:27:17.977364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:26.462 [2024-12-11 13:27:17.977376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:29:26.462 [2024-12-11 13:27:17.977386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.027823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.027865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:26.722 [2024-12-11 13:27:18.027880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.462 ms 00:29:26.722 [2024-12-11 13:27:18.027897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.027980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.027993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:26.722 [2024-12-11 13:27:18.028004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:26.722 [2024-12-11 13:27:18.028016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.091660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.091701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:26.722 [2024-12-11 13:27:18.091716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.658 ms 00:29:26.722 [2024-12-11 13:27:18.091727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.091766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.091783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:26.722 [2024-12-11 13:27:18.091795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:26.722 [2024-12-11 13:27:18.091805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.092651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.092672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:26.722 [2024-12-11 13:27:18.092684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:29:26.722 [2024-12-11 13:27:18.092695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.092828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.092843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:26.722 [2024-12-11 13:27:18.092859] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:29:26.722 [2024-12-11 13:27:18.092869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.114566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.114605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:26.722 [2024-12-11 13:27:18.114619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.710 ms 00:29:26.722 [2024-12-11 13:27:18.114646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.133781] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:26.722 [2024-12-11 13:27:18.133819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:26.722 [2024-12-11 13:27:18.133834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.133862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:26.722 [2024-12-11 13:27:18.133874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.099 ms 00:29:26.722 [2024-12-11 13:27:18.133885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.162783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.162825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:26.722 [2024-12-11 13:27:18.162839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.897 ms 00:29:26.722 [2024-12-11 13:27:18.162850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.180661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.180698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:26.722 [2024-12-11 13:27:18.180711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.774 ms 00:29:26.722 [2024-12-11 13:27:18.180722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.197985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.198022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:26.722 [2024-12-11 13:27:18.198035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.250 ms 00:29:26.722 [2024-12-11 13:27:18.198045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.722 [2024-12-11 13:27:18.198814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.722 [2024-12-11 13:27:18.198848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:26.722 [2024-12-11 13:27:18.198866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:29:26.722 [2024-12-11 13:27:18.198877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.981 [2024-12-11 13:27:18.292676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.981 [2024-12-11 13:27:18.292741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:26.981 [2024-12-11 13:27:18.292777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.925 ms 00:29:26.981 [2024-12-11 13:27:18.292805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.981 [2024-12-11 13:27:18.303311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:26.981 [2024-12-11 13:27:18.306582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.981 [2024-12-11 13:27:18.306612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:26.981 [2024-12-11 13:27:18.306627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:29:26.981 [2024-12-11 13:27:18.306655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.981 [2024-12-11 13:27:18.306777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.981 [2024-12-11 13:27:18.306792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:26.981 [2024-12-11 13:27:18.306804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:26.981 [2024-12-11 13:27:18.306820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.981 [2024-12-11 13:27:18.308246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.981 [2024-12-11 13:27:18.308273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:26.981 [2024-12-11 13:27:18.308285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:29:26.981 [2024-12-11 13:27:18.308296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.981 [2024-12-11 13:27:18.308322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.981 [2024-12-11 13:27:18.308334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:26.981 [2024-12-11 13:27:18.308346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:26.981 [2024-12-11 13:27:18.308358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.982 [2024-12-11 13:27:18.308405] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:26.982 [2024-12-11 13:27:18.308419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.982 [2024-12-11 13:27:18.308432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:26.982 [2024-12-11 13:27:18.308444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:26.982 [2024-12-11 13:27:18.308454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.982 [2024-12-11 13:27:18.343981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.982 [2024-12-11 13:27:18.344018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:26.982 [2024-12-11 13:27:18.344039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.562 ms 00:29:26.982 [2024-12-11 13:27:18.344050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.982 [2024-12-11 13:27:18.344134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.982 [2024-12-11 13:27:18.344147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:26.982 [2024-12-11 13:27:18.344159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:26.982 [2024-12-11 13:27:18.344170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:26.982 [2024-12-11 13:27:18.345741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.907 ms, result 0 00:29:28.360  [2024-12-11T13:27:20.865Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-11T13:27:21.802Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-11T13:27:22.739Z] Copying: 75/1024 [MB] (24 MBps) [2024-12-11T13:27:23.676Z] Copying: 100/1024 [MB] (25 MBps) [2024-12-11T13:27:24.613Z] Copying: 126/1024 [MB] (25 MBps) [2024-12-11T13:27:25.549Z] Copying: 151/1024 [MB] (25 MBps) [2024-12-11T13:27:26.927Z] Copying: 176/1024 [MB] (24 MBps) [2024-12-11T13:27:27.864Z] Copying: 200/1024 [MB] (24 MBps) [2024-12-11T13:27:28.801Z] Copying: 226/1024 [MB] (25 MBps) [2024-12-11T13:27:29.745Z] Copying: 252/1024 [MB] (26 MBps) [2024-12-11T13:27:30.709Z] Copying: 278/1024 [MB] (25 MBps) [2024-12-11T13:27:31.645Z] Copying: 303/1024 [MB] (25 MBps) [2024-12-11T13:27:32.581Z] Copying: 328/1024 [MB] (24 MBps) [2024-12-11T13:27:33.959Z] Copying: 352/1024 [MB] (23 MBps) [2024-12-11T13:27:34.895Z] Copying: 377/1024 [MB] (25 MBps) [2024-12-11T13:27:35.837Z] Copying: 401/1024 [MB] (24 MBps) [2024-12-11T13:27:36.775Z] Copying: 426/1024 [MB] (24 MBps) [2024-12-11T13:27:37.710Z] Copying: 451/1024 [MB] (24 MBps) [2024-12-11T13:27:38.646Z] Copying: 476/1024 [MB] (24 MBps) [2024-12-11T13:27:39.583Z] Copying: 501/1024 [MB] (24 MBps) [2024-12-11T13:27:40.960Z] Copying: 525/1024 [MB] (23 MBps) [2024-12-11T13:27:41.527Z] Copying: 549/1024 [MB] (24 MBps) [2024-12-11T13:27:42.905Z] Copying: 574/1024 [MB] (24 MBps) [2024-12-11T13:27:43.841Z] Copying: 599/1024 [MB] (25 MBps) [2024-12-11T13:27:44.778Z] Copying: 625/1024 [MB] (25 MBps) [2024-12-11T13:27:45.713Z] Copying: 649/1024 [MB] (24 MBps) [2024-12-11T13:27:46.649Z] Copying: 674/1024 [MB] (24 MBps) [2024-12-11T13:27:47.587Z] Copying: 699/1024 [MB] (24 MBps) [2024-12-11T13:27:48.523Z] Copying: 723/1024 [MB] (24 MBps) [2024-12-11T13:27:49.901Z] Copying: 748/1024 [MB] (24 MBps) [2024-12-11T13:27:50.838Z] Copying: 772/1024 [MB] (24 MBps) [2024-12-11T13:27:51.775Z] Copying: 796/1024 [MB] (24 MBps) [2024-12-11T13:27:52.713Z] Copying: 820/1024 [MB] (23 MBps) [2024-12-11T13:27:53.715Z] Copying: 844/1024 [MB] (24 MBps) [2024-12-11T13:27:54.652Z] Copying: 869/1024 [MB] (24 MBps) [2024-12-11T13:27:55.597Z] Copying: 893/1024 [MB] (24 MBps) [2024-12-11T13:27:56.533Z] Copying: 917/1024 [MB] (24 MBps) [2024-12-11T13:27:57.911Z] Copying: 941/1024 [MB] (23 MBps) [2024-12-11T13:27:58.848Z] Copying: 966/1024 [MB] (24 MBps) [2024-12-11T13:27:59.785Z] Copying: 989/1024 [MB] (23 MBps) [2024-12-11T13:28:00.044Z] Copying: 1013/1024 [MB] (23 MBps) [2024-12-11T13:28:00.044Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-11 13:27:59.996608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.476 [2024-12-11 13:27:59.996745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:08.476 [2024-12-11 13:27:59.996795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:08.476 [2024-12-11 13:27:59.996831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.476 [2024-12-11 13:27:59.996903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:08.476 [2024-12-11 13:28:00.010914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.476 [2024-12-11 13:28:00.010997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:08.476 
[2024-12-11 13:28:00.011026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms
[2024-12-11 13:28:00.011048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-11 13:28:00.011525] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (status: 0 for each):
  Stop core poller             duration:  0.424 ms
  Persist L2P                  duration:  5.896 ms
  Finish L2P trims             duration:  8.025 ms
  Persist NV cache metadata    duration: 39.821 ms
  Persist valid map metadata   duration: 21.478 ms
  Persist P2L metadata         duration:  2.137 ms
  Persist band info metadata   duration: 35.321 ms
  Persist trim metadata        duration: 34.827 ms
  Persist superblock           duration: 33.294 ms
  Set FTL clean state          duration: 33.652 ms
[2024-12-11 13:28:00.226849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1:       261120 / 261120   wr_cnt: 1   state: closed
  Band 2:         1536 / 261120   wr_cnt: 1   state: open
  Bands 3-100:       0 / 261120   wr_cnt: 0   state: free
[2024-12-11 13:28:00.227984] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID:      fc0ae49a-7b70-485f-90a1-9e0399327912
  total valid LBAs: 262656
  total writes:     960
  user writes:      0
  WAF:              inf
  limits:           crit: 0   high: 0   low: 0   start: 0
[2024-12-11 13:28:00.228108] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (status: 0 for each):
  Dump statistics              duration:  1.262 ms
  Deinitialize L2P             duration: 19.226 ms
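A note on the statistics dump above: WAF, the write amplification factor, is the ratio of media writes to user writes, and it prints as inf here because this shutdown pass recorded 960 total writes against 0 user writes, so the divisor is zero. Pulling the counters back out of a saved console log only needs grep and awk; a sketch against the ftl_debug.c dump format, with build.log as a hypothetical saved copy of this output:

  # extract the two counters from an FTL stats dump in a saved log
  total=$(grep -o 'total writes:[[:space:]]*[0-9]*' build.log | awk '{print $NF}')  # 960 here
  user=$(grep -o 'user writes:[[:space:]]*[0-9]*' build.log | awk '{print $NF}')    # 0 here
  if [ "$user" -gt 0 ]; then
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF = %.2f\n", t / u }'
  else
      echo "WAF = inf (no user writes)"   # matches the dump above
  fi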
[2024-12-11 13:28:00.248006] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.543 ms, status: 0
[2024-12-11 13:28:00.301687] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps (duration: 0.000 ms, status: 0 for each):
  Initialize reloc
  Initialize bands metadata
  Initialize trim map
  Initialize valid map
  Initialize NV cache
  Initialize metadata
  Initialize core IO channel
  Initialize bands
  Initialize memory pools
  Initialize superblock
  Open cache bdev
  Open base bdev
[2024-12-11 13:28:00.531741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.019 ms, result 0
13:28:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
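The trap - SIGINT SIGTERM EXIT and restore_kill pair above is the back half of the trap-based cleanup idiom these tests use (the matching trap 'cleanup; exit 1' SIGINT SIGTERM EXIT is installed when upgrade_shutdown.sh starts below): the EXIT trap guarantees cleanup on failure, and a passing test disarms it and tears down deliberately. A minimal generic sketch of the idiom, not the dirty_shutdown.sh source:

  #!/usr/bin/env bash
  cleanup() {
      rm -f /tmp/testfile /tmp/testfile.md5    # scratch files, whether we pass or die mid-run
  }
  trap 'cleanup; exit 1' SIGINT SIGTERM EXIT   # fire on interruption or early exit
  # ... test body ...
  trap - SIGINT SIGTERM EXIT                   # test passed: disarm the trap
  cleanup                                      # ... then tear down deliberately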
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82573
13:28:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82573 ']'
13:28:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82573
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82573) - No such process
Process with pid 82573 is not found
13:28:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82573 is not found'
13:28:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
Remove shared memory files
13:28:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
13:28:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f

real    3m43.838s
user    4m10.290s
sys     0m40.675s
13:28:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
13:28:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST ftl_dirty_shutdown
************************************
13:28:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
13:28:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
13:28:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
13:28:04 ftl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST ftl_upgrade_shutdown
************************************
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
* Looking for test storage...
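killprocess 82573 above has nothing left to do because the target process already exited during the FTL shutdown traced earlier; the helper probes the pid with kill -0 (signal 0 tests existence without delivering a signal) and only acts when the process is alive. A simplified sketch of that probe-then-kill shape, not the real autotest_common.sh helper, which also retries and escalates signals:

  killprocess() {
      local pid=$1
      if kill -0 "$pid" 2>/dev/null; then        # probe: is the process still alive?
          kill "$pid" && wait "$pid" 2>/dev/null  # stop it and reap the exit status
      else
          echo "Process with pid $pid is not found"
      fi
  }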
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
13:28:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
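The scripts/common.sh tracing above is lt 1.15 2 deciding that the installed lcov predates 2.x, so the legacy --rc option names are exported: the helper splits both version strings on '.', '-' and ':' and compares them field by field. A shorter equivalent built on GNU sort -V (a sketch; the real cmp_versions does the comparison itself rather than relying on coreutils version sort):

  lt() {  # true if $1 is strictly older than $2
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo "lcov < 2: applying legacy --rc lcov_* option names"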
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12-25 -- # target defaults (each traced as an export plus an assignment):
  ftl_tgt_core_mask='[0]'
  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  spdk_tgt_cpumask='[0]'
  spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
  spdk_tgt_pid=
  spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  spdk_ini_cpumask='[1]'
  spdk_ini_rpc=/var/tmp/spdk.tgt.sock
  spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  spdk_ini_pid=
  spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19-24 -- # test parameters (each traced as an export plus an assignment):
  FTL_BDEV=ftl
  FTL_BASE=0000:00:11.0
  FTL_BASE_SIZE=20480
  FTL_CACHE=0000:00:10.0
  FTL_CACHE_SIZE=5120
  FTL_L2P_DRAM_LIMIT=2
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84939
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
13:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84939
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84939 ']'
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
13:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
[2024-12-11 13:28:04.493181] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
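tcp_target_setup above backgrounds spdk_tgt pinned to core 0 and then waitforlisten 84939 polls the default RPC socket until the target responds. A minimal sketch of that launch-and-poll pattern using the paths from this log; the real waitforlisten in autotest_common.sh handles retries and failure more carefully:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$spdk_tgt" --cpumask='[0]' &            # start the target on core 0
  spdk_tgt_pid=$!
  for _ in $(seq 1 100); do                # up to 100 probes of the RPC socket
      "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.2
  done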
[2024-12-11 13:28:04.493326] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84939 ]
[2024-12-11 13:28:04.676762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-11 13:28:04.805299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
13:28:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
13:28:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100-101 -- # for param in "${params[@]}": [[ -z ftl ]], [[ -z 0000:00:11.0 ]], [[ -z 20480 ]], [[ -z 0000:00:10.0 ]], [[ -z 5120 ]], [[ -z 2 ]]
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev
13:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "basen1",
    "aliases": [ "f7212c56-6749-494b-8fff-a548cae1239c" ],
    "product_name": "NVMe disk",
    "block_size": 4096,
    "num_blocks": 1310720,
    "uuid": "f7212c56-6749-494b-8fff-a548cae1239c",
    "numa_id": -1,
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": true,
    "claim_type": "read_many_write_one",
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
    "driver_specific": {
      "nvme": [
        {
          "pci_address": "0000:00:11.0",
          "trid": { "trtype": "PCIe", "traddr": "0000:00:11.0" },
          "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12341", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12341", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
          "vs": { "nvme_version": "1.4" },
          "ns_data": { "id": 1, "can_share": false }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]'
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
13:28:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
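get_bdev_size above reduces the bdev JSON to a size in MiB: 1310720 blocks x 4096 bytes per block = 5368709120 bytes = 5120 MiB. So basen1, the QEMU NVMe namespace, is only 5 GiB, the [[ 20480 -le 5120 ]] guard fails, and the script instead builds the requested 20480 MiB base out of a thin-provisioned logical volume below. The same reduction as a one-liner (jq expressions as in the helper, default RPC socket assumed):

  size_mib=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 \
      | jq '.[0].block_size * .[0].num_blocks / (1024*1024)')
  echo "basen1: ${size_mib} MiB"   # prints 5120 for the geometry dumped above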
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=667d6507-0413-4d5b-a683-6be6a98979ad
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 667d6507-0413-4d5b-a683-6be6a98979ad
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=89acc053-34bb-4a68-8b9b-ed78bf9ff8f5
13:28:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 89acc053-34bb-4a68-8b9b-ed78bf9ff8f5
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=aed58910-7405-4fc6-a02a-f7f38fb4c640
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z aed58910-7405-4fc6-a02a-f7f38fb4c640 ]]
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 aed58910-7405-4fc6-a02a-f7f38fb4c640 5120
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=aed58910-7405-4fc6-a02a-f7f38fb4c640
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size aed58910-7405-4fc6-a02a-f7f38fb4c640
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=aed58910-7405-4fc6-a02a-f7f38fb4c640
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aed58910-7405-4fc6-a02a-f7f38fb4c640
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
  {
    "name": "aed58910-7405-4fc6-a02a-f7f38fb4c640",
    "aliases": [ "lvs/basen1p0" ],
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 5242880,
    "uuid": "aed58910-7405-4fc6-a02a-f7f38fb4c640",
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": false,
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": false, "reset": true, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": false, "compare_and_write": false, "abort": false, "seek_hole": true, "seek_data": true, "copy": false, "nvme_iov_md": false },
    "driver_specific": {
      "lvol": {
        "lvol_store_uuid": "89acc053-34bb-4a68-8b9b-ed78bf9ff8f5",
        "base_bdev": "basen1",
        "thin_provision": true,
        "num_allocated_clusters": 0,
        "snapshot": false,
        "clone": false,
        "esnap_clone": false
      }
    }
  }
]'
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
13:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
13:28:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d aed58910-7405-4fc6-a02a-f7f38fb4c640 -c cachen1p0 --l2p_dram_limit 2
[2024-12-11 13:28:08.187172] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Action: Check configuration, duration: 0.005 ms, status: 0
[2024-12-11 13:28:08.187357] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Action: Open base bdev, duration: 0.056 ms, status: 0
[2024-12-11 13:28:08.187419] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
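The RPC sequence above is the core of the fixture: FTL_BASE_SIZE asks for a 20480 MiB base device, but basen1 is only 5120 MiB, so the script carves a thin-provisioned lvol of the requested size ("thin_provision": true with zero allocated clusters in the dump above) and hands that to bdev_ftl_create, with a 5120 MiB split of the cache controller as the NV cache. Condensed to just the RPC calls, with the same commands and arguments as traced above and the generated UUIDs left as placeholders:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore UUID
  $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # thin 20 GiB lvol
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2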
[2024-12-11 13:28:08.188528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
[2024-12-11 13:28:08.188569] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Action: Open cache bdev, duration: 1.152 ms, status: 0
[2024-12-11 13:28:08.188690] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e6930e04-252c-40f4-a8eb-97a52b9652e9
[2024-12-11 13:28:08.191230] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Actions (status: 0 for each):
  Default-initialize superblock   duration:  0.021 ms
  Initialize memory pools         duration: 13.580 ms
  Initialize bands                duration:  0.031 ms
  Register IO device              duration:  0.020 ms
[2024-12-11 13:28:08.205445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
[2024-12-11 13:28:08.211729] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Actions (status: 0 for each):
  Initialize core IO channel      duration:  6.301 ms
  Decorate bands                  duration:  0.005 ms
[2024-12-11 13:28:08.211917] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
[2024-12-11 13:28:08.212071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
[2024-12-11 13:28:08.212094] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
[2024-12-11 13:28:08.212109] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
[2024-12-11 13:28:08.212148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB
[2024-12-11 13:28:08.212161] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB
[2024-12-11 13:28:08.212179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873
[2024-12-11 13:28:08.212190] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4
[2024-12-11 13:28:08.212207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048
[2024-12-11 13:28:08.212218] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5
[2024-12-11 13:28:08.212232] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl] Actions (status: 0 for each):
  Initialize layout               duration:  0.318 ms
  Verify layout                   duration:  0.058 ms
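The geometry above can be cross-checked against the region dump below: the base device carries 18432 MiB of user data (the data_btm region), which at the 4 KiB block size is 4718592 blocks, and the advertised 3774873 L2P entries are 80% of that, consistent with a 20% overprovisioning default (an inference from the numbers; the log does not state the ratio). At 4 bytes per entry (L2P address size: 4) the table needs roughly 14.4 MiB, matching the 14.50 MiB l2p region below. The same arithmetic in shell:

  echo $(( 18432 * 1024 * 1024 / 4096 ))   # 4718592 data blocks (18432 MiB / 4 KiB)
  echo $(( 4718592 * 80 / 100 ))           # 3774873 L2P entries at 80% of capacity
  echo $(( 3774873 * 4 / 1024 / 1024 ))    # ~14 MiB of L2P -> the 14.50 MiB l2p region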
[2024-12-11 13:28:08.212491] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
  Region             offset (MiB)   blocks (MiB)
  sb                         0.00           0.12
  l2p                        0.12          14.50
  band_md                   14.62           0.12
  band_md_mirror            14.75           0.12
  nvc_md                    47.38           0.12
  nvc_md_mirror             47.50           0.12
  p2l0                      14.88           8.00
  p2l1                      22.88           8.00
  p2l2                      30.88           8.00
  p2l3                      38.88           8.00
  trim_md                   46.88           0.12
  trim_md_mirror            47.00           0.12
  trim_log                  47.12           0.12
  trim_log_mirror           47.25           0.12
[2024-12-11 13:28:08.212972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
  Region             offset (MiB)   blocks (MiB)
  sb_mirror                  0.00           0.12
  vmap                   18432.25           0.88
  data_btm                   0.25       18432.00
[2024-12-11 13:28:08.213093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
  Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
  Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
  Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
  Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
  Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
[2024-12-11 13:28:08.213286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:16.905 [2024-12-11 13:28:08.213300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:16.905 [2024-12-11 13:28:08.213310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:16.905 [2024-12-11 13:28:08.213325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.905 [2024-12-11 13:28:08.213336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.905 [2024-12-11 13:28:08.213349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:16.905 [2024-12-11 13:28:08.213360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:16.905 [2024-12-11 13:28:08.213373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:16.905 [2024-12-11 13:28:08.213384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.905 [2024-12-11 13:28:08.213397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:16.905 [2024-12-11 13:28:08.213408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.955 ms 00:30:16.905 [2024-12-11 13:28:08.213421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.905 [2024-12-11 13:28:08.213467] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
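A note on the units in the dumps above: dump_region reports offsets and sizes in MiB, while the superblock v5 rows report them in 4 KiB FTL blocks, so the two tables can be cross-checked with plain shell arithmetic (illustrative only, not part of the captured run):

    echo $(( 0xe80 * 4096 ))      # l2p blk_sz 0xe80 -> 15204352 B = 14.50 MiB, as dumped
    echo $(( 0x480000 * 4096 ))   # data blk_sz 0x480000 -> 19327352832 B = 18432.00 MiB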
00:30:16.905 [2024-12-11 13:28:08.213486] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:21.102 [2024-12-11 13:28:11.814637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.814921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:21.102 [2024-12-11 13:28:11.814967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3607.013 ms 00:30:21.102 [2024-12-11 13:28:11.814983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.860007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.860070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:21.102 [2024-12-11 13:28:11.860089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.770 ms 00:30:21.102 [2024-12-11 13:28:11.860104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.860224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.860258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:21.102 [2024-12-11 13:28:11.860271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:21.102 [2024-12-11 13:28:11.860294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.912706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.912758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:21.102 [2024-12-11 13:28:11.912790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.433 ms 00:30:21.102 [2024-12-11 13:28:11.912805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.912844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.912865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:21.102 [2024-12-11 13:28:11.912877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:21.102 [2024-12-11 13:28:11.912892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.913719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.913745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:21.102 [2024-12-11 13:28:11.913771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms 00:30:21.102 [2024-12-11 13:28:11.913786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.913831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.913846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:21.102 [2024-12-11 13:28:11.913861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:21.102 [2024-12-11 13:28:11.913878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.940360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.940539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:21.102 [2024-12-11 13:28:11.940564] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.502 ms 00:30:21.102 [2024-12-11 13:28:11.940579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:11.991071] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:21.102 [2024-12-11 13:28:11.992950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:11.993158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:21.102 [2024-12-11 13:28:11.993198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.354 ms 00:30:21.102 [2024-12-11 13:28:11.993215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.029049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.029217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:21.102 [2024-12-11 13:28:12.029263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.837 ms 00:30:21.102 [2024-12-11 13:28:12.029275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.029401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.029421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:21.102 [2024-12-11 13:28:12.029440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:30:21.102 [2024-12-11 13:28:12.029451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.064228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.064395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:21.102 [2024-12-11 13:28:12.064439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.769 ms 00:30:21.102 [2024-12-11 13:28:12.064451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.098912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.099060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:21.102 [2024-12-11 13:28:12.099102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.465 ms 00:30:21.102 [2024-12-11 13:28:12.099113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.099873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.099896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:21.102 [2024-12-11 13:28:12.099912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:30:21.102 [2024-12-11 13:28:12.099926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.198923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.199066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:21.102 [2024-12-11 13:28:12.199114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.074 ms 00:30:21.102 [2024-12-11 13:28:12.199126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.236469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:21.102 [2024-12-11 13:28:12.236634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:21.102 [2024-12-11 13:28:12.236663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.247 ms 00:30:21.102 [2024-12-11 13:28:12.236675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.270907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.270942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:21.102 [2024-12-11 13:28:12.270959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.179 ms 00:30:21.102 [2024-12-11 13:28:12.270968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.305475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.305510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:21.102 [2024-12-11 13:28:12.305549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.517 ms 00:30:21.102 [2024-12-11 13:28:12.305559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.305609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.305621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:21.102 [2024-12-11 13:28:12.305640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:21.102 [2024-12-11 13:28:12.305651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.305777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.102 [2024-12-11 13:28:12.305792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:21.102 [2024-12-11 13:28:12.305807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:21.102 [2024-12-11 13:28:12.305817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.102 [2024-12-11 13:28:12.307515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4126.192 ms, result 0 00:30:21.102 { 00:30:21.102 "name": "ftl", 00:30:21.102 "uuid": "e6930e04-252c-40f4-a8eb-97a52b9652e9" 00:30:21.102 } 00:30:21.102 13:28:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:21.102 [2024-12-11 13:28:12.529837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.102 13:28:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:21.361 13:28:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:21.620 [2024-12-11 13:28:12.937867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:21.620 13:28:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:21.620 [2024-12-11 13:28:13.144479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:21.620 13:28:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:22.189 Fill FTL, iteration 1 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85072 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85072 /var/tmp/spdk.tgt.sock 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:22.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85072 ']' 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.189 13:28:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:22.189 [2024-12-11 13:28:13.623368] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
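What tcp_initiator_setup does above, sketched from the paths and flags in this run: it launches a second spdk_tgt pinned to core 1 with a private RPC socket, so it can serve as the NVMe/TCP initiator without contending with the main target:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock   # autotest helper; blocks until the RPC socket answers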
00:30:22.189 [2024-12-11 13:28:13.623522] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85072 ] 00:30:22.449 [2024-12-11 13:28:13.807174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.449 [2024-12-11 13:28:13.947616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.386 13:28:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.386 13:28:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:23.386 13:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:23.645 ftln1 00:30:23.645 13:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:23.645 13:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85072 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85072 ']' 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85072 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85072 00:30:23.904 killing process with pid 85072 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85072' 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85072 00:30:23.904 13:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85072 00:30:26.471 13:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:26.471 13:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:26.471 [2024-12-11 13:28:17.600647] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
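The attach above surfaces the exported namespace as bdev ftln1 on the initiator, and the echo/save_subsystem_config/echo trio captures the initiator's bdev configuration as JSON; the initiator app is then killed, since spdk_dd can replay that JSON on its own via --json. A sketch of the sequence (the redirection target is not visible in the xtrace and is inferred from the --json path used afterwards):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0                  # prints: ftln1
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json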
00:30:26.471 [2024-12-11 13:28:17.600786] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85125 ] 00:30:26.471 [2024-12-11 13:28:17.784097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.471 [2024-12-11 13:28:17.889135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.850  [2024-12-11T13:28:20.357Z] Copying: 255/1024 [MB] (255 MBps) [2024-12-11T13:28:21.736Z] Copying: 512/1024 [MB] (257 MBps) [2024-12-11T13:28:22.674Z] Copying: 770/1024 [MB] (258 MBps) [2024-12-11T13:28:23.612Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:30:32.044 00:30:32.044 Calculate MD5 checksum, iteration 1 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:32.044 13:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:32.044 [2024-12-11 13:28:23.520011] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:30:32.044 [2024-12-11 13:28:23.520383] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85189 ] 00:30:32.304 [2024-12-11 13:28:23.705182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.304 [2024-12-11 13:28:23.809693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.683  [2024-12-11T13:28:26.187Z] Copying: 625/1024 [MB] (625 MBps) [2024-12-11T13:28:27.124Z] Copying: 1024/1024 [MB] (average 620 MBps) 00:30:35.556 00:30:35.556 13:28:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:35.556 13:28:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:36.935 Fill FTL, iteration 2 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f11afe52fc022ae5219c6506ca23f650 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.935 13:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:37.194 [2024-12-11 13:28:28.567900] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
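With the first checksum (f11afe52fc022ae5219c6506ca23f650) recorded in sums[0], seek and skip each advance by one fill size (1024 MiB) and the cycle repeats. A reconstruction of the loop implied by the xtrace (variable values as set at upgrade_shutdown.sh@28-35 above; the actual script may differ in detail):

    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    done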
00:30:37.194 [2024-12-11 13:28:28.568291] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85246 ] 00:30:37.194 [2024-12-11 13:28:28.748227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.453 [2024-12-11 13:28:28.851476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.832  [2024-12-11T13:28:31.338Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-11T13:28:32.717Z] Copying: 527/1024 [MB] (266 MBps) [2024-12-11T13:28:33.286Z] Copying: 792/1024 [MB] (265 MBps) [2024-12-11T13:28:34.664Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:30:43.096 00:30:43.096 Calculate MD5 checksum, iteration 2 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:43.096 13:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.096 [2024-12-11 13:28:34.386923] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:30:43.096 [2024-12-11 13:28:34.387039] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85303 ] 00:30:43.096 [2024-12-11 13:28:34.562068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.355 [2024-12-11 13:28:34.668369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.733  [2024-12-11T13:28:37.239Z] Copying: 630/1024 [MB] (630 MBps) [2024-12-11T13:28:38.177Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:30:46.609 00:30:46.609 13:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:46.609 13:28:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b3ff1c448a1b8bcc33bfade07ba96aae 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:48.515 [2024-12-11 13:28:39.953768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.515 [2024-12-11 13:28:39.954009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:48.515 [2024-12-11 13:28:39.954041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:48.515 [2024-12-11 13:28:39.954055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.515 [2024-12-11 13:28:39.954102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.515 [2024-12-11 13:28:39.954140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:48.515 [2024-12-11 13:28:39.954153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:48.515 [2024-12-11 13:28:39.954164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.515 [2024-12-11 13:28:39.954188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.515 [2024-12-11 13:28:39.954201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:48.515 [2024-12-11 13:28:39.954213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:48.515 [2024-12-11 13:28:39.954224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.515 [2024-12-11 13:28:39.954309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.517 ms, result 0 00:30:48.515 true 00:30:48.515 13:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.775 { 00:30:48.775 "name": "ftl", 00:30:48.775 "properties": [ 00:30:48.775 { 00:30:48.775 "name": "superblock_version", 00:30:48.775 "value": 5, 00:30:48.775 "read-only": true 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "name": "base_device", 00:30:48.775 "bands": [ 00:30:48.775 { 00:30:48.775 "id": 0, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 
00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 1, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 2, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 3, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 4, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 5, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 6, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 7, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 8, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 9, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 10, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 11, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 12, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.775 { 00:30:48.775 "id": 13, 00:30:48.775 "state": "FREE", 00:30:48.775 "validity": 0.0 00:30:48.775 }, 00:30:48.776 { 00:30:48.776 "id": 14, 00:30:48.776 "state": "FREE", 00:30:48.776 "validity": 0.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 15, 00:30:48.776 "state": "FREE", 00:30:48.776 "validity": 0.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 16, 00:30:48.776 "state": "FREE", 00:30:48.776 "validity": 0.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 17, 00:30:48.776 "state": "FREE", 00:30:48.776 "validity": 0.0 00:30:48.776 } 00:30:48.776 ], 00:30:48.776 "read-only": true 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "name": "cache_device", 00:30:48.776 "type": "bdev", 00:30:48.776 "chunks": [ 00:30:48.776 { 00:30:48.776 "id": 0, 00:30:48.776 "state": "INACTIVE", 00:30:48.776 "utilization": 0.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 1, 00:30:48.776 "state": "CLOSED", 00:30:48.776 "utilization": 1.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 2, 00:30:48.776 "state": "CLOSED", 00:30:48.776 "utilization": 1.0 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 3, 00:30:48.776 "state": "OPEN", 00:30:48.776 "utilization": 0.001953125 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "id": 4, 00:30:48.776 "state": "OPEN", 00:30:48.776 "utilization": 0.0 00:30:48.776 } 00:30:48.776 ], 00:30:48.776 "read-only": true 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "name": "verbose_mode", 00:30:48.776 "value": true, 00:30:48.776 "unit": "", 00:30:48.776 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:48.776 }, 00:30:48.776 { 00:30:48.776 "name": "prep_upgrade_on_shutdown", 00:30:48.776 "value": false, 00:30:48.776 "unit": "", 00:30:48.776 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:48.776 } 00:30:48.776 ] 00:30:48.776 } 00:30:48.776 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:49.035 [2024-12-11 13:28:40.393774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
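Reading the cache_device section of the dump above: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, so exactly three chunks carry data. That is what the jq filter a few steps below counts; the same filter can be run by hand against the same RPC:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
    # -> 3, the used=3 seen below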
00:30:49.035 [2024-12-11 13:28:40.393990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:49.035 [2024-12-11 13:28:40.394109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:49.035 [2024-12-11 13:28:40.394163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.035 [2024-12-11 13:28:40.394226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.035 [2024-12-11 13:28:40.394260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:49.035 [2024-12-11 13:28:40.394291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.035 [2024-12-11 13:28:40.394381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.035 [2024-12-11 13:28:40.394435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.035 [2024-12-11 13:28:40.394468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:49.035 [2024-12-11 13:28:40.394499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:49.035 [2024-12-11 13:28:40.394529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.035 [2024-12-11 13:28:40.394674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.891 ms, result 0 00:30:49.035 true 00:30:49.035 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:49.035 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:49.035 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:49.320 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:49.321 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:49.321 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:49.321 [2024-12-11 13:28:40.817731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.321 [2024-12-11 13:28:40.817782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:49.321 [2024-12-11 13:28:40.817798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:49.321 [2024-12-11 13:28:40.817809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.321 [2024-12-11 13:28:40.817836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.321 [2024-12-11 13:28:40.817847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:49.321 [2024-12-11 13:28:40.817858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:49.321 [2024-12-11 13:28:40.817868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.321 [2024-12-11 13:28:40.817889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.321 [2024-12-11 13:28:40.817900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:49.321 [2024-12-11 13:28:40.817911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:49.321 [2024-12-11 13:28:40.817921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:49.321 [2024-12-11 13:28:40.817986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.242 ms, result 0 00:30:49.321 true 00:30:49.321 13:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:49.584 { 00:30:49.584 "name": "ftl", 00:30:49.584 "properties": [ 00:30:49.584 { 00:30:49.584 "name": "superblock_version", 00:30:49.584 "value": 5, 00:30:49.584 "read-only": true 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "name": "base_device", 00:30:49.584 "bands": [ 00:30:49.584 { 00:30:49.584 "id": 0, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 1, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 2, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 3, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 4, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 5, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 6, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 7, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 8, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 9, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 10, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 11, 00:30:49.584 "state": "FREE", 00:30:49.584 "validity": 0.0 00:30:49.584 }, 00:30:49.584 { 00:30:49.584 "id": 12, 00:30:49.584 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 13, 00:30:49.585 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 14, 00:30:49.585 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 15, 00:30:49.585 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 16, 00:30:49.585 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 17, 00:30:49.585 "state": "FREE", 00:30:49.585 "validity": 0.0 00:30:49.585 } 00:30:49.585 ], 00:30:49.585 "read-only": true 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "name": "cache_device", 00:30:49.585 "type": "bdev", 00:30:49.585 "chunks": [ 00:30:49.585 { 00:30:49.585 "id": 0, 00:30:49.585 "state": "INACTIVE", 00:30:49.585 "utilization": 0.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 1, 00:30:49.585 "state": "CLOSED", 00:30:49.585 "utilization": 1.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 2, 00:30:49.585 "state": "CLOSED", 00:30:49.585 "utilization": 1.0 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 3, 00:30:49.585 "state": "OPEN", 00:30:49.585 "utilization": 0.001953125 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "id": 4, 00:30:49.585 "state": "OPEN", 00:30:49.585 "utilization": 0.0 00:30:49.585 } 00:30:49.585 ], 00:30:49.585 "read-only": true 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "name": "verbose_mode", 
00:30:49.585 "value": true, 00:30:49.585 "unit": "", 00:30:49.585 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:49.585 }, 00:30:49.585 { 00:30:49.585 "name": "prep_upgrade_on_shutdown", 00:30:49.585 "value": true, 00:30:49.585 "unit": "", 00:30:49.585 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:49.585 } 00:30:49.585 ] 00:30:49.585 } 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84939 ]] 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84939 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84939 ']' 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84939 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84939 00:30:49.585 killing process with pid 84939 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84939' 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84939 00:30:49.585 13:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84939 00:30:50.965 [2024-12-11 13:28:42.281570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:50.965 [2024-12-11 13:28:42.300704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.965 [2024-12-11 13:28:42.300749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:50.965 [2024-12-11 13:28:42.300766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.965 [2024-12-11 13:28:42.300794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.965 [2024-12-11 13:28:42.300821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:50.965 [2024-12-11 13:28:42.305471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.965 [2024-12-11 13:28:42.305501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:50.965 [2024-12-11 13:28:42.305514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.639 ms 00:30:50.965 [2024-12-11 13:28:42.305552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.089 [2024-12-11 13:28:49.393437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.089 [2024-12-11 13:28:49.393507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:59.089 [2024-12-11 13:28:49.393541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7099.337 ms 00:30:59.089 [2024-12-11 13:28:49.393568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.089 [2024-12-11 13:28:49.394859] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:59.089 [2024-12-11 13:28:49.394894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:59.089 [2024-12-11 13:28:49.394907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.271 ms 00:30:59.089 [2024-12-11 13:28:49.394919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.089 [2024-12-11 13:28:49.395875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.089 [2024-12-11 13:28:49.395897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:59.089 [2024-12-11 13:28:49.395910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.929 ms 00:30:59.089 [2024-12-11 13:28:49.395927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.089 [2024-12-11 13:28:49.411597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.089 [2024-12-11 13:28:49.411632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:59.089 [2024-12-11 13:28:49.411646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.656 ms 00:30:59.089 [2024-12-11 13:28:49.411672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.089 [2024-12-11 13:28:49.420945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.089 [2024-12-11 13:28:49.420986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:59.089 [2024-12-11 13:28:49.421001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.250 ms 00:30:59.089 [2024-12-11 13:28:49.421012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.421156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.421180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:59.090 [2024-12-11 13:28:49.421192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.106 ms 00:30:59.090 [2024-12-11 13:28:49.421203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.435872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.435907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:59.090 [2024-12-11 13:28:49.435920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.675 ms 00:30:59.090 [2024-12-11 13:28:49.435930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.450649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.450819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:59.090 [2024-12-11 13:28:49.450839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.706 ms 00:30:59.090 [2024-12-11 13:28:49.450866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.465366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.465527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:59.090 [2024-12-11 13:28:49.465548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.469 ms 00:30:59.090 [2024-12-11 13:28:49.465558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.480288] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.480323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:59.090 [2024-12-11 13:28:49.480335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.672 ms 00:30:59.090 [2024-12-11 13:28:49.480345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.480379] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:59.090 [2024-12-11 13:28:49.480412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:59.090 [2024-12-11 13:28:49.480425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:59.090 [2024-12-11 13:28:49.480437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:59.090 [2024-12-11 13:28:49.480448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.090 [2024-12-11 13:28:49.480610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:59.090 [2024-12-11 13:28:49.480620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e6930e04-252c-40f4-a8eb-97a52b9652e9 00:30:59.090 [2024-12-11 13:28:49.480631] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:59.090 [2024-12-11 13:28:49.480641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:59.090 [2024-12-11 13:28:49.480652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:59.090 [2024-12-11 13:28:49.480662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:59.090 [2024-12-11 13:28:49.480678] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:59.090 [2024-12-11 13:28:49.480689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:59.090 [2024-12-11 13:28:49.480703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:59.090 [2024-12-11 13:28:49.480712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:59.090 [2024-12-11 13:28:49.480722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:59.090 [2024-12-11 13:28:49.480732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.480744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:59.090 [2024-12-11 13:28:49.480755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:30:59.090 [2024-12-11 13:28:49.480765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.501563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.501597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:59.090 [2024-12-11 13:28:49.501632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.800 ms 00:30:59.090 [2024-12-11 13:28:49.501643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.502252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.090 [2024-12-11 13:28:49.502265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:59.090 [2024-12-11 13:28:49.502277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:30:59.090 [2024-12-11 13:28:49.502289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.572286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.572331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:59.090 [2024-12-11 13:28:49.572345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.572357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.572395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.572406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:59.090 [2024-12-11 13:28:49.572417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.572428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.572531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.572546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:59.090 [2024-12-11 13:28:49.572562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.572573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.572593] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.572604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:59.090 [2024-12-11 13:28:49.572615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.572626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.705134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.705217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:59.090 [2024-12-11 13:28:49.705241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.705253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.808877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.808941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:59.090 [2024-12-11 13:28:49.808959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.808972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.809108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:59.090 [2024-12-11 13:28:49.809150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.809227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:59.090 [2024-12-11 13:28:49.809276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.809421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:59.090 [2024-12-11 13:28:49.809447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.809505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:59.090 [2024-12-11 13:28:49.809538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 [2024-12-11 13:28:49.809597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:59.090 [2024-12-11 13:28:49.809621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.090 
[2024-12-11 13:28:49.809690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.090 [2024-12-11 13:28:49.809703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:59.090 [2024-12-11 13:28:49.809715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.090 [2024-12-11 13:28:49.809726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.091 [2024-12-11 13:28:49.809886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7521.321 ms, result 0 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85510 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85510 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85510 ']' 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.384 13:28:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:02.384 [2024-12-11 13:28:53.593411] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:31:02.384 [2024-12-11 13:28:53.593753] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85510 ] 00:31:02.384 [2024-12-11 13:28:53.779550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.384 [2024-12-11 13:28:53.911897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.765 [2024-12-11 13:28:54.970409] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:03.765 [2024-12-11 13:28:54.970723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:03.765 [2024-12-11 13:28:55.117823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.118026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:03.765 [2024-12-11 13:28:55.118141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:03.765 [2024-12-11 13:28:55.118187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.118291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.118330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:03.765 [2024-12-11 13:28:55.118363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:31:03.765 [2024-12-11 13:28:55.118471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.118547] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:03.765 [2024-12-11 13:28:55.119627] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:03.765 [2024-12-11 13:28:55.119793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.119871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:03.765 [2024-12-11 13:28:55.119909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.265 ms 00:31:03.765 [2024-12-11 13:28:55.119923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.122550] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:03.765 [2024-12-11 13:28:55.142002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.142164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:03.765 [2024-12-11 13:28:55.142320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.485 ms 00:31:03.765 [2024-12-11 13:28:55.142359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.142446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.142539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:03.765 [2024-12-11 13:28:55.142576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:03.765 [2024-12-11 13:28:55.142605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.155042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 
13:28:55.155199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:03.765 [2024-12-11 13:28:55.155329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.291 ms 00:31:03.765 [2024-12-11 13:28:55.155366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.155465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.155501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:03.765 [2024-12-11 13:28:55.155531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:03.765 [2024-12-11 13:28:55.155559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.155641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.155763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:03.765 [2024-12-11 13:28:55.155875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:03.765 [2024-12-11 13:28:55.155905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.155954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:03.765 [2024-12-11 13:28:55.161703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.161832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:03.765 [2024-12-11 13:28:55.161902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.765 ms 00:31:03.765 [2024-12-11 13:28:55.161944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.162008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.765 [2024-12-11 13:28:55.162040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:03.765 [2024-12-11 13:28:55.162127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:03.765 [2024-12-11 13:28:55.162165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.765 [2024-12-11 13:28:55.162232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:03.765 [2024-12-11 13:28:55.162285] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:03.765 [2024-12-11 13:28:55.162410] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:03.765 [2024-12-11 13:28:55.162511] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:03.765 [2024-12-11 13:28:55.162646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:03.765 [2024-12-11 13:28:55.162750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:03.765 [2024-12-11 13:28:55.162803] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:03.765 [2024-12-11 13:28:55.162852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:03.765 [2024-12-11 13:28:55.163065] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:03.765 [2024-12-11 13:28:55.163131] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:03.766 [2024-12-11 13:28:55.163166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:03.766 [2024-12-11 13:28:55.163195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:03.766 [2024-12-11 13:28:55.163225] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:03.766 [2024-12-11 13:28:55.163256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.766 [2024-12-11 13:28:55.163286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:03.766 [2024-12-11 13:28:55.163316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.029 ms 00:31:03.766 [2024-12-11 13:28:55.163345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.766 [2024-12-11 13:28:55.163451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.766 [2024-12-11 13:28:55.163538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:03.766 [2024-12-11 13:28:55.163558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:03.766 [2024-12-11 13:28:55.163569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.766 [2024-12-11 13:28:55.163660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:03.766 [2024-12-11 13:28:55.163674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:03.766 [2024-12-11 13:28:55.163685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:03.766 [2024-12-11 13:28:55.163696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:03.766 [2024-12-11 13:28:55.163717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:03.766 [2024-12-11 13:28:55.163736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:03.766 [2024-12-11 13:28:55.163746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:03.766 [2024-12-11 13:28:55.163755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:03.766 [2024-12-11 13:28:55.163773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:03.766 [2024-12-11 13:28:55.163782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:03.766 [2024-12-11 13:28:55.163801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:03.766 [2024-12-11 13:28:55.163810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:03.766 [2024-12-11 13:28:55.163829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:03.766 [2024-12-11 13:28:55.163838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.163848] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:03.766 [2024-12-11 13:28:55.163857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:03.766 [2024-12-11 13:28:55.163865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:03.766 [2024-12-11 13:28:55.163877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:03.766 [2024-12-11 13:28:55.163898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:03.766 [2024-12-11 13:28:55.163908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:03.766 [2024-12-11 13:28:55.163917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:03.766 [2024-12-11 13:28:55.163927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:03.766 [2024-12-11 13:28:55.163936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:03.766 [2024-12-11 13:28:55.163945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:03.766 [2024-12-11 13:28:55.163955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:03.766 [2024-12-11 13:28:55.163964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:03.766 [2024-12-11 13:28:55.163974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:03.766 [2024-12-11 13:28:55.163983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:03.766 [2024-12-11 13:28:55.163992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:03.766 [2024-12-11 13:28:55.164012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:03.766 [2024-12-11 13:28:55.164022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:03.766 [2024-12-11 13:28:55.164041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:03.766 [2024-12-11 13:28:55.164068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:03.766 [2024-12-11 13:28:55.164077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164085] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:03.766 [2024-12-11 13:28:55.164096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:03.766 [2024-12-11 13:28:55.164105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:03.766 [2024-12-11 13:28:55.164127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:03.766 [2024-12-11 13:28:55.164154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:03.766 [2024-12-11 13:28:55.164164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:03.766 [2024-12-11 13:28:55.164173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:03.766 [2024-12-11 13:28:55.164183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:03.766 [2024-12-11 13:28:55.164192] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:03.766 [2024-12-11 13:28:55.164202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:03.766 [2024-12-11 13:28:55.164214] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:03.766 [2024-12-11 13:28:55.164229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:03.766 [2024-12-11 13:28:55.164253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:03.766 [2024-12-11 13:28:55.164285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:03.766 [2024-12-11 13:28:55.164295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:03.766 [2024-12-11 13:28:55.164306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:03.766 [2024-12-11 13:28:55.164316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:03.766 [2024-12-11 13:28:55.164386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:03.766 [2024-12-11 13:28:55.164397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:03.766 [2024-12-11 13:28:55.164418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:03.766 [2024-12-11 13:28:55.164428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:03.766 [2024-12-11 13:28:55.164438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:03.766 [2024-12-11 13:28:55.164448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.766 [2024-12-11 13:28:55.164458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:03.766 [2024-12-11 13:28:55.164468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:31:03.766 [2024-12-11 13:28:55.164478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.766 [2024-12-11 13:28:55.164527] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:03.766 [2024-12-11 13:28:55.164540] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:07.968 [2024-12-11 13:28:58.696104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.696190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:07.968 [2024-12-11 13:28:58.696211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3537.309 ms 00:31:07.968 [2024-12-11 13:28:58.696223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.968 [2024-12-11 13:28:58.743550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.743611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:07.968 [2024-12-11 13:28:58.743629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.088 ms 00:31:07.968 [2024-12-11 13:28:58.743640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.968 [2024-12-11 13:28:58.743740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.743759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:07.968 [2024-12-11 13:28:58.743771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:07.968 [2024-12-11 13:28:58.743781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.968 [2024-12-11 13:28:58.796565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.796617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:07.968 [2024-12-11 13:28:58.796634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.766 ms 00:31:07.968 [2024-12-11 13:28:58.796649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.968 [2024-12-11 13:28:58.796698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.796710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:07.968 [2024-12-11 13:28:58.796722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:07.968 [2024-12-11 13:28:58.796732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.968 [2024-12-11 13:28:58.797600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.968 [2024-12-11 13:28:58.797617] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:07.968 [2024-12-11 13:28:58.797630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.811 ms 00:31:07.968 [2024-12-11 13:28:58.797640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.797691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.797703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:07.969 [2024-12-11 13:28:58.797714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:07.969 [2024-12-11 13:28:58.797742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.822995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.823037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:07.969 [2024-12-11 13:28:58.823051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.267 ms 00:31:07.969 [2024-12-11 13:28:58.823062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.870052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:07.969 [2024-12-11 13:28:58.870101] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:07.969 [2024-12-11 13:28:58.870132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.870146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:07.969 [2024-12-11 13:28:58.870160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.008 ms 00:31:07.969 [2024-12-11 13:28:58.870172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.889667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.889723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:07.969 [2024-12-11 13:28:58.889738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.466 ms 00:31:07.969 [2024-12-11 13:28:58.889749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.906879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.906918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:07.969 [2024-12-11 13:28:58.906932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.103 ms 00:31:07.969 [2024-12-11 13:28:58.906942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.924458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.924601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:07.969 [2024-12-11 13:28:58.924621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.501 ms 00:31:07.969 [2024-12-11 13:28:58.924647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:58.925398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:58.925427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:07.969 [2024-12-11 
13:28:58.925441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.621 ms 00:31:07.969 [2024-12-11 13:28:58.925452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.018069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.018163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:07.969 [2024-12-11 13:28:59.018181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.737 ms 00:31:07.969 [2024-12-11 13:28:59.018209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.028585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:07.969 [2024-12-11 13:28:59.029369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.029452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:07.969 [2024-12-11 13:28:59.029469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.123 ms 00:31:07.969 [2024-12-11 13:28:59.029481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.029570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.029589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:07.969 [2024-12-11 13:28:59.029602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:07.969 [2024-12-11 13:28:59.029613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.029682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.029695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:07.969 [2024-12-11 13:28:59.029707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:07.969 [2024-12-11 13:28:59.029718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.029748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.029761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:07.969 [2024-12-11 13:28:59.029776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:07.969 [2024-12-11 13:28:59.029787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.029827] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:07.969 [2024-12-11 13:28:59.029840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.029850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:07.969 [2024-12-11 13:28:59.029862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:07.969 [2024-12-11 13:28:59.029873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.064762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.064806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:07.969 [2024-12-11 13:28:59.064820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.918 ms 00:31:07.969 [2024-12-11 13:28:59.064830] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.064910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.969 [2024-12-11 13:28:59.064923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:07.969 [2024-12-11 13:28:59.064935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:07.969 [2024-12-11 13:28:59.064945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.969 [2024-12-11 13:28:59.066535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3954.575 ms, result 0 00:31:07.969 [2024-12-11 13:28:59.081106] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.969 [2024-12-11 13:28:59.097095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:07.969 [2024-12-11 13:28:59.106081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:08.229 [2024-12-11 13:28:59.765718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.229 [2024-12-11 13:28:59.765768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:08.229 [2024-12-11 13:28:59.765789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:08.229 [2024-12-11 13:28:59.765800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.229 [2024-12-11 13:28:59.765825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.229 [2024-12-11 13:28:59.765836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:08.229 [2024-12-11 13:28:59.765848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:08.229 [2024-12-11 13:28:59.765858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.229 [2024-12-11 13:28:59.765879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.229 [2024-12-11 13:28:59.765890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:08.229 [2024-12-11 13:28:59.765902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:08.229 [2024-12-11 13:28:59.765912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.229 [2024-12-11 13:28:59.765977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.250 ms, result 0 00:31:08.229 true 00:31:08.229 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:08.488 { 00:31:08.488 "name": "ftl", 00:31:08.488 "properties": [ 00:31:08.488 { 00:31:08.488 "name": "superblock_version", 00:31:08.488 "value": 5, 00:31:08.488 "read-only": true 00:31:08.488 }, 
00:31:08.488 { 00:31:08.488 "name": "base_device", 00:31:08.488 "bands": [ 00:31:08.488 { 00:31:08.488 "id": 0, 00:31:08.488 "state": "CLOSED", 00:31:08.488 "validity": 1.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 1, 00:31:08.488 "state": "CLOSED", 00:31:08.488 "validity": 1.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 2, 00:31:08.488 "state": "CLOSED", 00:31:08.488 "validity": 0.007843137254901933 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 3, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 4, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 5, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 6, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 7, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 8, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 9, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 10, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 11, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 12, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 13, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 14, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 15, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 16, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 17, 00:31:08.488 "state": "FREE", 00:31:08.488 "validity": 0.0 00:31:08.488 } 00:31:08.488 ], 00:31:08.488 "read-only": true 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "name": "cache_device", 00:31:08.488 "type": "bdev", 00:31:08.488 "chunks": [ 00:31:08.488 { 00:31:08.488 "id": 0, 00:31:08.488 "state": "INACTIVE", 00:31:08.488 "utilization": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 1, 00:31:08.488 "state": "OPEN", 00:31:08.488 "utilization": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 2, 00:31:08.488 "state": "OPEN", 00:31:08.488 "utilization": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 3, 00:31:08.488 "state": "FREE", 00:31:08.488 "utilization": 0.0 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "id": 4, 00:31:08.488 "state": "FREE", 00:31:08.488 "utilization": 0.0 00:31:08.488 } 00:31:08.488 ], 00:31:08.488 "read-only": true 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "name": "verbose_mode", 00:31:08.488 "value": true, 00:31:08.488 "unit": "", 00:31:08.488 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:08.488 }, 00:31:08.488 { 00:31:08.488 "name": "prep_upgrade_on_shutdown", 00:31:08.488 "value": false, 00:31:08.488 "unit": "", 00:31:08.488 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:08.488 } 00:31:08.488 ] 00:31:08.488 } 00:31:08.488 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:08.488 13:28:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:08.488 13:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:08.748 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:08.748 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:08.748 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:08.748 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:08.748 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:09.008 Validate MD5 checksum, iteration 1 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:09.008 13:29:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:09.008 [2024-12-11 13:29:00.500334] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:31:09.008 [2024-12-11 13:29:00.500629] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85602 ] 00:31:09.267 [2024-12-11 13:29:00.675953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.267 [2024-12-11 13:29:00.783727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.173  [2024-12-11T13:29:03.309Z] Copying: 630/1024 [MB] (630 MBps) [2024-12-11T13:29:04.687Z] Copying: 1024/1024 [MB] (average 619 MBps) 00:31:13.119 00:31:13.119 13:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:13.119 13:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:15.020 Validate MD5 checksum, iteration 2 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f11afe52fc022ae5219c6506ca23f650 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f11afe52fc022ae5219c6506ca23f650 != \f\1\1\a\f\e\5\2\f\c\0\2\2\a\e\5\2\1\9\c\6\5\0\6\c\a\2\3\f\6\5\0 ]] 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:15.020 13:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:15.020 [2024-12-11 13:29:06.251629] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:31:15.020 [2024-12-11 13:29:06.251938] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85665 ] 00:31:15.020 [2024-12-11 13:29:06.430915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.020 [2024-12-11 13:29:06.533546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.927  [2024-12-11T13:29:09.064Z] Copying: 630/1024 [MB] (630 MBps) [2024-12-11T13:29:12.355Z] Copying: 1024/1024 [MB] (average 626 MBps) 00:31:20.787 00:31:20.787 13:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:20.787 13:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b3ff1c448a1b8bcc33bfade07ba96aae 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b3ff1c448a1b8bcc33bfade07ba96aae != \b\3\f\f\1\c\4\4\8\a\1\b\8\b\c\c\3\3\b\f\a\d\e\0\7\b\a\9\6\a\a\e ]] 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85510 ]] 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85510 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85743 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85743 00:31:22.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85743 ']' 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.220 13:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:22.220 [2024-12-11 13:29:13.684074] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:31:22.220 [2024-12-11 13:29:13.684214] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85743 ] 00:31:22.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85510 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:22.479 [2024-12-11 13:29:13.866079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.479 [2024-12-11 13:29:13.997193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.860 [2024-12-11 13:29:15.045047] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:23.860 [2024-12-11 13:29:15.045142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:23.860 [2024-12-11 13:29:15.192538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.192584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:23.860 [2024-12-11 13:29:15.192600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:23.860 [2024-12-11 13:29:15.192611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.192668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.192680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:23.860 [2024-12-11 13:29:15.192691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:23.860 [2024-12-11 13:29:15.192701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.192730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:23.860 [2024-12-11 13:29:15.193632] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:23.860 [2024-12-11 13:29:15.193658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.193669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:23.860 [2024-12-11 13:29:15.193681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.940 ms 00:31:23.860 [2024-12-11 13:29:15.193691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.194166] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:23.860 [2024-12-11 13:29:15.219432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.219473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:23.860 [2024-12-11 13:29:15.219487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.307 ms 00:31:23.860 [2024-12-11 13:29:15.219514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.232654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:23.860 [2024-12-11 13:29:15.232694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:23.860 [2024-12-11 13:29:15.232707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:23.860 [2024-12-11 13:29:15.232716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.233397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.233459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:23.860 [2024-12-11 13:29:15.233499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:31:23.860 [2024-12-11 13:29:15.233543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.233711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.233756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:23.860 [2024-12-11 13:29:15.233790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:31:23.860 [2024-12-11 13:29:15.233822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.233937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.233976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:23.860 [2024-12-11 13:29:15.234010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:23.860 [2024-12-11 13:29:15.234023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.234052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:23.860 [2024-12-11 13:29:15.237980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.238110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:23.860 [2024-12-11 13:29:15.238158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.941 ms 00:31:23.860 [2024-12-11 13:29:15.238170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.238218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.238230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:23.860 [2024-12-11 13:29:15.238241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:23.860 [2024-12-11 13:29:15.238252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.238288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:23.860 [2024-12-11 13:29:15.238318] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:23.860 [2024-12-11 13:29:15.238354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:23.860 [2024-12-11 13:29:15.238376] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:23.860 [2024-12-11 13:29:15.238470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:23.860 [2024-12-11 13:29:15.238485] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:23.860 [2024-12-11 13:29:15.238498] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:23.860 [2024-12-11 13:29:15.238512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:23.860 [2024-12-11 13:29:15.238524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:23.860 [2024-12-11 13:29:15.238536] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:23.860 [2024-12-11 13:29:15.238547] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:23.860 [2024-12-11 13:29:15.238557] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:23.860 [2024-12-11 13:29:15.238567] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:23.860 [2024-12-11 13:29:15.238578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.238592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:23.860 [2024-12-11 13:29:15.238604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:31:23.860 [2024-12-11 13:29:15.238613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.238685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.860 [2024-12-11 13:29:15.238696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:23.860 [2024-12-11 13:29:15.238706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:23.860 [2024-12-11 13:29:15.238716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.860 [2024-12-11 13:29:15.238806] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:23.860 [2024-12-11 13:29:15.238819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:23.860 [2024-12-11 13:29:15.238835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:23.860 [2024-12-11 13:29:15.238845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.860 [2024-12-11 13:29:15.238856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:23.860 [2024-12-11 13:29:15.238866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:23.860 [2024-12-11 13:29:15.238876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:23.860 [2024-12-11 13:29:15.238885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:23.860 [2024-12-11 13:29:15.238897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:23.861 [2024-12-11 13:29:15.238906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.238916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:23.861 [2024-12-11 13:29:15.238926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:23.861 [2024-12-11 13:29:15.238936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.238946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:23.861 [2024-12-11 13:29:15.238955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:23.861 [2024-12-11 13:29:15.238964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.238974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:23.861 [2024-12-11 13:29:15.238984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:23.861 [2024-12-11 13:29:15.238993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:23.861 [2024-12-11 13:29:15.239011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:23.861 [2024-12-11 13:29:15.239052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:23.861 [2024-12-11 13:29:15.239081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:23.861 [2024-12-11 13:29:15.239109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:23.861 [2024-12-11 13:29:15.239157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:23.861 [2024-12-11 13:29:15.239186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:23.861 [2024-12-11 13:29:15.239213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:23.861 [2024-12-11 13:29:15.239242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:23.861 [2024-12-11 13:29:15.239252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239261] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:23.861 [2024-12-11 13:29:15.239273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:23.861 [2024-12-11 13:29:15.239283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:23.861 [2024-12-11 13:29:15.239314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:23.861 [2024-12-11 13:29:15.239324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:23.861 [2024-12-11 13:29:15.239334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:23.861 [2024-12-11 13:29:15.239343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:23.861 [2024-12-11 13:29:15.239352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:23.861 [2024-12-11 13:29:15.239362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:23.861 [2024-12-11 13:29:15.239372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:23.861 [2024-12-11 13:29:15.239384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:23.861 [2024-12-11 13:29:15.239406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:23.861 [2024-12-11 13:29:15.239437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:23.861 [2024-12-11 13:29:15.239448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:23.861 [2024-12-11 13:29:15.239459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:23.861 [2024-12-11 13:29:15.239470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:23.861 [2024-12-11 13:29:15.239541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:23.861 [2024-12-11 13:29:15.239562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:23.861 [2024-12-11 13:29:15.239589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:23.861 [2024-12-11 13:29:15.239599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:23.861 [2024-12-11 13:29:15.239614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:23.861 [2024-12-11 13:29:15.239625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.239635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:23.861 [2024-12-11 13:29:15.239645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:31:23.861 [2024-12-11 13:29:15.239655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.280292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.280475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:23.861 [2024-12-11 13:29:15.280497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.653 ms 00:31:23.861 [2024-12-11 13:29:15.280510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.280555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.280567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:23.861 [2024-12-11 13:29:15.280578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:23.861 [2024-12-11 13:29:15.280589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.331336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.331510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:23.861 [2024-12-11 13:29:15.331533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.766 ms 00:31:23.861 [2024-12-11 13:29:15.331545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.331588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.331600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:23.861 [2024-12-11 13:29:15.331612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:23.861 [2024-12-11 13:29:15.331629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.331775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.331789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:23.861 [2024-12-11 13:29:15.331801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:31:23.861 [2024-12-11 13:29:15.331812] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.331859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.331871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:23.861 [2024-12-11 13:29:15.331882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:23.861 [2024-12-11 13:29:15.331893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.357457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.357493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:23.861 [2024-12-11 13:29:15.357507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.574 ms 00:31:23.861 [2024-12-11 13:29:15.357530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.357666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.357682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:23.861 [2024-12-11 13:29:15.357694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:23.861 [2024-12-11 13:29:15.357704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.861 [2024-12-11 13:29:15.411362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:23.861 [2024-12-11 13:29:15.411512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:23.861 [2024-12-11 13:29:15.411552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.717 ms 00:31:23.861 [2024-12-11 13:29:15.411565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.425374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.425412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:24.120 [2024-12-11 13:29:15.425453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.657 ms 00:31:24.120 [2024-12-11 13:29:15.425464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.516701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.516970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:24.120 [2024-12-11 13:29:15.517015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 91.307 ms 00:31:24.120 [2024-12-11 13:29:15.517028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.517278] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:24.120 [2024-12-11 13:29:15.517453] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:24.120 [2024-12-11 13:29:15.517634] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:24.120 [2024-12-11 13:29:15.517793] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:24.120 [2024-12-11 13:29:15.517808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.517822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:24.120 [2024-12-11 
13:29:15.517834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:31:24.120 [2024-12-11 13:29:15.517846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.517936] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:24.120 [2024-12-11 13:29:15.517953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.517970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:24.120 [2024-12-11 13:29:15.517982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:24.120 [2024-12-11 13:29:15.517993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.539501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.539648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:24.120 [2024-12-11 13:29:15.539687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.499 ms 00:31:24.120 [2024-12-11 13:29:15.539699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.552680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.552715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:24.120 [2024-12-11 13:29:15.552729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:24.120 [2024-12-11 13:29:15.552739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.120 [2024-12-11 13:29:15.552870] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:24.120 [2024-12-11 13:29:15.553238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.120 [2024-12-11 13:29:15.553252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:24.120 [2024-12-11 13:29:15.553278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:31:24.120 [2024-12-11 13:29:15.553290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.689 [2024-12-11 13:29:16.147571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.689 [2024-12-11 13:29:16.147653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:24.689 [2024-12-11 13:29:16.147673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 594.027 ms 00:31:24.689 [2024-12-11 13:29:16.147687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.689 [2024-12-11 13:29:16.153178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.689 [2024-12-11 13:29:16.153222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:24.689 [2024-12-11 13:29:16.153237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:31:24.689 [2024-12-11 13:29:16.153249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.689 [2024-12-11 13:29:16.153853] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:24.689 [2024-12-11 13:29:16.153881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.689 [2024-12-11 13:29:16.153893] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:24.689 [2024-12-11 13:29:16.153906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:31:24.689 [2024-12-11 13:29:16.153917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.689 [2024-12-11 13:29:16.154022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.689 [2024-12-11 13:29:16.154036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:24.689 [2024-12-11 13:29:16.154048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:24.689 [2024-12-11 13:29:16.154065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.689 [2024-12-11 13:29:16.154104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 602.210 ms, result 0 00:31:24.689 [2024-12-11 13:29:16.154165] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:24.689 [2024-12-11 13:29:16.154248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.689 [2024-12-11 13:29:16.154258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:24.689 [2024-12-11 13:29:16.154268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:31:24.689 [2024-12-11 13:29:16.154278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.728204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.728288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:25.258 [2024-12-11 13:29:16.728327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 573.643 ms 00:31:25.258 [2024-12-11 13:29:16.728339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.734109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.734159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:25.258 [2024-12-11 13:29:16.734173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:31:25.258 [2024-12-11 13:29:16.734183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.734741] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:25.258 [2024-12-11 13:29:16.734764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.734776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:25.258 [2024-12-11 13:29:16.734788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:31:25.258 [2024-12-11 13:29:16.734799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.734832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.734845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:25.258 [2024-12-11 13:29:16.734855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:25.258 [2024-12-11 13:29:16.734866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 
13:29:16.734905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 581.681 ms, result 0 00:31:25.258 [2024-12-11 13:29:16.734955] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:25.258 [2024-12-11 13:29:16.734969] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:25.258 [2024-12-11 13:29:16.734983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.734994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:25.258 [2024-12-11 13:29:16.735005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1184.058 ms 00:31:25.258 [2024-12-11 13:29:16.735016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.735050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.735068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:25.258 [2024-12-11 13:29:16.735079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:25.258 [2024-12-11 13:29:16.735090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.747022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:25.258 [2024-12-11 13:29:16.747321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.747371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:25.258 [2024-12-11 13:29:16.747462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.232 ms 00:31:25.258 [2024-12-11 13:29:16.747499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.748209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.748334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:25.258 [2024-12-11 13:29:16.748427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:31:25.258 [2024-12-11 13:29:16.748463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.750553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.258 [2024-12-11 13:29:16.750697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:25.258 [2024-12-11 13:29:16.750825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.045 ms 00:31:25.258 [2024-12-11 13:29:16.750862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.258 [2024-12-11 13:29:16.750935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.259 [2024-12-11 13:29:16.751077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:25.259 [2024-12-11 13:29:16.751126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:25.259 [2024-12-11 13:29:16.751167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.259 [2024-12-11 13:29:16.751310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.259 [2024-12-11 13:29:16.751346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:25.259 
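
[annotation] Every FTL management step in the startup trace above is reported by trace_step in mngt/ftl_mngt.c as an Action/name/duration/status quadruple. When a startup looks slow, the steps can be ranked by duration straight from a saved console log. A minimal sketch, assuming the output above has been captured to ftl.log (a hypothetical path) and that each name/duration pair lands on the same input line:

    # Rank FTL management steps by reported duration, longest first.
    # Relies only on the literal "name: ..." and "duration: ... ms"
    # fields printed by trace_step in the log above.
    perl -ne 'while (/name: (.*?) \d\d:\d\d:\d\d.*?duration: ([0-9.]+) ms/g) {
                  printf "%10.3f ms  %s\n", $2, $1;
              }' ftl.log | sort -rn | head -15

On this run it would put "Recover open chunks P2L" (1184.058 ms) and the two "Chunk recovery, read vss" passes (594.027 ms and 573.643 ms) at the top, ahead of "Restore P2L checkpoints" (91.307 ms) and "Recover band state" (53.717 ms).
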
[2024-12-11 13:29:16.751360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:25.259 [2024-12-11 13:29:16.751370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.259 [2024-12-11 13:29:16.751394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.259 [2024-12-11 13:29:16.751406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:25.259 [2024-12-11 13:29:16.751417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:25.259 [2024-12-11 13:29:16.751427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.259 [2024-12-11 13:29:16.751494] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:25.259 [2024-12-11 13:29:16.751507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.259 [2024-12-11 13:29:16.751517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:25.259 [2024-12-11 13:29:16.751529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:25.259 [2024-12-11 13:29:16.751540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.259 [2024-12-11 13:29:16.751598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.259 [2024-12-11 13:29:16.751609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:25.259 [2024-12-11 13:29:16.751620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:25.259 [2024-12-11 13:29:16.751631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.259 [2024-12-11 13:29:16.752837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1562.311 ms, result 0 00:31:25.259 [2024-12-11 13:29:16.768045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.259 [2024-12-11 13:29:16.784015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:25.259 [2024-12-11 13:29:16.794372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:25.518 Validate MD5 checksum, iteration 1 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.518 13:29:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.518 13:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.518 [2024-12-11 13:29:16.916388] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 00:31:25.518 [2024-12-11 13:29:16.916730] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85778 ] 00:31:25.777 [2024-12-11 13:29:17.093714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.777 [2024-12-11 13:29:17.201077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.682  [2024-12-11T13:29:19.512Z] Copying: 642/1024 [MB] (642 MBps) [2024-12-11T13:29:22.048Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:31:30.480 00:31:30.738 13:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:30.738 13:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:32.640 Validate MD5 checksum, iteration 2 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f11afe52fc022ae5219c6506ca23f650 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f11afe52fc022ae5219c6506ca23f650 != \f\1\1\a\f\e\5\2\f\c\0\2\2\a\e\5\2\1\9\c\6\5\0\6\c\a\2\3\f\6\5\0 ]] 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:32.640 13:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:32.640 [2024-12-11 13:29:23.874789] Starting SPDK v25.01-pre git sha1 
bcaf208e3 / DPDK 24.03.0 initialization... 00:31:32.640 [2024-12-11 13:29:23.875242] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85856 ] 00:31:32.640 [2024-12-11 13:29:24.056766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.640 [2024-12-11 13:29:24.159043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.543  [2024-12-11T13:29:26.678Z] Copying: 633/1024 [MB] (633 MBps) [2024-12-11T13:29:27.614Z] Copying: 1024/1024 [MB] (average 613 MBps) 00:31:36.046 00:31:36.046 13:29:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:36.046 13:29:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b3ff1c448a1b8bcc33bfade07ba96aae 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b3ff1c448a1b8bcc33bfade07ba96aae != \b\3\f\f\1\c\4\4\8\a\1\b\8\b\c\c\3\3\b\f\a\d\e\0\7\b\a\9\6\a\a\e ]] 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85743 ]] 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85743 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85743 ']' 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85743 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85743 00:31:37.949 killing process with pid 85743 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85743' 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85743 00:31:37.949 13:29:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85743 00:31:39.330 [2024-12-11 13:29:30.533151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:39.330 [2024-12-11 13:29:30.553630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.553674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:39.330 [2024-12-11 13:29:30.553691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:39.330 [2024-12-11 13:29:30.553718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.553744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:39.330 [2024-12-11 13:29:30.558212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.558244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:39.330 [2024-12-11 13:29:30.558262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.457 ms 00:31:39.330 [2024-12-11 13:29:30.558288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.558508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.558522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:39.330 [2024-12-11 13:29:30.558533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:31:39.330 [2024-12-11 13:29:30.558544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.559719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.559756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:39.330 [2024-12-11 13:29:30.559768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:31:39.330 [2024-12-11 13:29:30.559785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.560715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.560743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:39.330 [2024-12-11 13:29:30.560755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:31:39.330 [2024-12-11 13:29:30.560765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.574827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.574870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:39.330 [2024-12-11 13:29:30.574883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.023 ms 00:31:39.330 [2024-12-11 13:29:30.574899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.582783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.582817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:39.330 [2024-12-11 13:29:30.582831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.841 ms 00:31:39.330 [2024-12-11 13:29:30.582841] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.582935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.582947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:39.330 [2024-12-11 13:29:30.582959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:39.330 [2024-12-11 13:29:30.582975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.597235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.597267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:39.330 [2024-12-11 13:29:30.597279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.266 ms 00:31:39.330 [2024-12-11 13:29:30.597289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.611589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.611623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:39.330 [2024-12-11 13:29:30.611635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.273 ms 00:31:39.330 [2024-12-11 13:29:30.611644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.625384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.625416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:39.330 [2024-12-11 13:29:30.625429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.711 ms 00:31:39.330 [2024-12-11 13:29:30.625438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.640011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.330 [2024-12-11 13:29:30.640047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:39.330 [2024-12-11 13:29:30.640075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.502 ms 00:31:39.330 [2024-12-11 13:29:30.640085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.330 [2024-12-11 13:29:30.640129] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:39.330 [2024-12-11 13:29:30.640148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:39.330 [2024-12-11 13:29:30.640162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:39.330 [2024-12-11 13:29:30.640173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:39.330 [2024-12-11 13:29:30.640185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:39.330 [2024-12-11 13:29:30.640196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:39.330 [2024-12-11 13:29:30.640208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:39.330 [2024-12-11 13:29:30.640219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:39.330 [2024-12-11 13:29:30.640230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 
[2024-12-11 13:29:30.640240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:39.331 [2024-12-11 13:29:30.640349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:39.331 [2024-12-11 13:29:30.640360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e6930e04-252c-40f4-a8eb-97a52b9652e9 00:31:39.331 [2024-12-11 13:29:30.640371] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:39.331 [2024-12-11 13:29:30.640381] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:39.331 [2024-12-11 13:29:30.640391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:39.331 [2024-12-11 13:29:30.640402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:39.331 [2024-12-11 13:29:30.640412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:39.331 [2024-12-11 13:29:30.640423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:39.331 [2024-12-11 13:29:30.640439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:39.331 [2024-12-11 13:29:30.640448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:39.331 [2024-12-11 13:29:30.640457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:39.331 [2024-12-11 13:29:30.640467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.331 [2024-12-11 13:29:30.640479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:39.331 [2024-12-11 13:29:30.640493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:31:39.331 [2024-12-11 13:29:30.640504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.661342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.331 [2024-12-11 13:29:30.661388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:39.331 [2024-12-11 13:29:30.661418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.840 ms 00:31:39.331 [2024-12-11 13:29:30.661429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:39.331 [2024-12-11 13:29:30.661985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.331 [2024-12-11 13:29:30.662004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:39.331 [2024-12-11 13:29:30.662015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:31:39.331 [2024-12-11 13:29:30.662026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.729889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.331 [2024-12-11 13:29:30.729932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:39.331 [2024-12-11 13:29:30.729946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.331 [2024-12-11 13:29:30.729979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.730017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.331 [2024-12-11 13:29:30.730029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:39.331 [2024-12-11 13:29:30.730041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.331 [2024-12-11 13:29:30.730052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.730154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.331 [2024-12-11 13:29:30.730169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:39.331 [2024-12-11 13:29:30.730181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.331 [2024-12-11 13:29:30.730191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.730216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.331 [2024-12-11 13:29:30.730228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:39.331 [2024-12-11 13:29:30.730240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.331 [2024-12-11 13:29:30.730251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.331 [2024-12-11 13:29:30.863343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.331 [2024-12-11 13:29:30.863419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:39.331 [2024-12-11 13:29:30.863436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.331 [2024-12-11 13:29:30.863449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.590 [2024-12-11 13:29:30.966453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.966514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:39.591 [2024-12-11 13:29:30.966531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.966559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.966698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.966711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:39.591 [2024-12-11 13:29:30.966724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.966735] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.966793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.966824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:39.591 [2024-12-11 13:29:30.966836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.966847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.966985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.966999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:39.591 [2024-12-11 13:29:30.967011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.967022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.967062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.967075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:39.591 [2024-12-11 13:29:30.967092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.967102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.967166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.967180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:39.591 [2024-12-11 13:29:30.967192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.967202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.967255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:39.591 [2024-12-11 13:29:30.967272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:39.591 [2024-12-11 13:29:30.967283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:39.591 [2024-12-11 13:29:30.967293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.591 [2024-12-11 13:29:30.967441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 414.437 ms, result 0 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:40.969 Remove shared memory files 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:40.969 13:29:32 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85510 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:40.969 00:31:40.969 real 1m28.237s 00:31:40.969 user 1m57.339s 00:31:40.969 sys 0m24.733s 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.969 13:29:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:40.969 ************************************ 00:31:40.969 END TEST ftl_upgrade_shutdown 00:31:40.969 ************************************ 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@14 -- # killprocess 78152 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 78152 ']' 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@958 -- # kill -0 78152 00:31:40.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78152) - No such process 00:31:40.969 Process with pid 78152 is not found 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 78152 is not found' 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85982 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:40.969 13:29:32 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85982 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@835 -- # '[' -z 85982 ']' 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.969 13:29:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:40.969 [2024-12-11 13:29:32.530511] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization... 
00:31:40.969 [2024-12-11 13:29:32.530511] Starting SPDK v25.01-pre git sha1 bcaf208e3 / DPDK 24.03.0 initialization...
00:31:40.969 [2024-12-11 13:29:32.530623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85982 ]
00:31:41.228 [2024-12-11 13:29:32.712881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:41.488 [2024-12-11 13:29:32.846433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:42.426 13:29:33 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:42.426 13:29:33 ftl -- common/autotest_common.sh@868 -- # return 0
00:31:42.426 13:29:33 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:42.685 nvme0n1
00:31:42.685 13:29:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:42.685 13:29:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:42.685 13:29:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:42.944 13:29:34 ftl -- ftl/common.sh@28 -- # stores=89acc053-34bb-4a68-8b9b-ed78bf9ff8f5
00:31:42.944 13:29:34 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:42.944 13:29:34 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89acc053-34bb-4a68-8b9b-ed78bf9ff8f5
00:31:42.944 13:29:34 ftl -- ftl/ftl.sh@23 -- # killprocess 85982
00:31:42.944 13:29:34 ftl -- common/autotest_common.sh@954 -- # '[' -z 85982 ']'
00:31:42.944 13:29:34 ftl -- common/autotest_common.sh@958 -- # kill -0 85982
00:31:42.944 13:29:34 ftl -- common/autotest_common.sh@959 -- # uname
00:31:42.944 13:29:34 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:42.944 13:29:34 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85982
00:31:43.204 13:29:34 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:43.204 13:29:34 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:43.204 killing process with pid 85982 13:29:34 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85982'
00:31:43.204 13:29:34 ftl -- common/autotest_common.sh@973 -- # kill 85982
00:31:43.204 13:29:34 ftl -- common/autotest_common.sh@978 -- # wait 85982
00:31:45.770 13:29:37 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:45.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:46.034 Waiting for block devices as requested
00:31:46.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:46.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:46.294 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:46.554 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:51.835 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:51.835 13:29:42 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:51.835 Remove shared memory files 13:29:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:51.835 13:29:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:51.835 13:29:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:51.835 13:29:43 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:51.835 13:29:43 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:51.835 13:29:43 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:51.835 ************************************
00:31:51.835 END TEST ftl
00:31:51.835 ************************************
00:31:51.836
00:31:51.836 real 11m37.954s
00:31:51.836 user 14m2.410s
00:31:51.836 sys 1m38.988s
00:31:51.836 13:29:43 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:51.836 13:29:43 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:51.836 13:29:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:51.836 13:29:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:51.836 13:29:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:51.836 13:29:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:51.836 13:29:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:51.836 13:29:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:51.836 13:29:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:51.836 13:29:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:51.836 13:29:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:51.836 13:29:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:51.836 13:29:43 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:51.836 13:29:43 -- common/autotest_common.sh@10 -- # set +x
00:31:51.836 13:29:43 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:51.836 13:29:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:51.836 13:29:43 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:51.836 13:29:43 -- common/autotest_common.sh@10 -- # set +x
00:31:54.373 INFO: APP EXITING
00:31:54.374 INFO: killing all VMs
00:31:54.374 INFO: killing vhost app
00:31:54.374 INFO: EXIT DONE
00:31:54.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:54.942 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:54.942 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:54.942 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:55.201 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:55.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:56.031 Cleaning
00:31:56.031 Removing: /var/run/dpdk/spdk0/config
00:31:56.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:56.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:56.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:56.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:56.031 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:56.031 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:56.031 Removing: /var/run/dpdk/spdk0
00:31:56.031 Removing: /var/run/dpdk/spdk_pid58733
00:31:56.031 Removing: /var/run/dpdk/spdk_pid58985
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59225
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59329
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59385
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59524
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59553
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59763
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59881
00:31:56.031 Removing: /var/run/dpdk/spdk_pid59999
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60127
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60239
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60280
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60316
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60392
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60515
00:31:56.031 Removing: /var/run/dpdk/spdk_pid60964
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61046
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61133
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61149
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61310
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61337
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61497
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61513
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61588
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61612
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61681
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61705
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61911
00:31:56.291 Removing: /var/run/dpdk/spdk_pid61952
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62037
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62232
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62336
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62379
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62842
00:31:56.291 Removing: /var/run/dpdk/spdk_pid62945
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63060
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63113
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63144
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63228
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63881
00:31:56.291 Removing: /var/run/dpdk/spdk_pid63929
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64420
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64530
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64645
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64703
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64733
00:31:56.291 Removing: /var/run/dpdk/spdk_pid64760
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66664
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66812
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66816
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66839
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66878
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66882
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66894
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66944
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66948
00:31:56.291 Removing: /var/run/dpdk/spdk_pid66960
00:31:56.291 Removing: /var/run/dpdk/spdk_pid67005
00:31:56.291 Removing: /var/run/dpdk/spdk_pid67014
00:31:56.291 Removing: /var/run/dpdk/spdk_pid67026
00:31:56.291 Removing: /var/run/dpdk/spdk_pid68448
00:31:56.291 Removing: /var/run/dpdk/spdk_pid68569
00:31:56.291 Removing: /var/run/dpdk/spdk_pid70011
00:31:56.291 Removing: /var/run/dpdk/spdk_pid71759
00:31:56.291 Removing: /var/run/dpdk/spdk_pid71844
00:31:56.291 Removing: /var/run/dpdk/spdk_pid71925
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72040
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72139
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72240
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72326
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72407
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72522
00:31:56.291 Removing: /var/run/dpdk/spdk_pid72614
00:31:56.551 Removing: /var/run/dpdk/spdk_pid72721
00:31:56.551 Removing: /var/run/dpdk/spdk_pid72807
00:31:56.551 Removing: /var/run/dpdk/spdk_pid72888
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73002
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73095
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73196
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73282
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73363
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73473
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73580
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73677
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73762
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73845
00:31:56.551 Removing: /var/run/dpdk/spdk_pid73925
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74005
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74115
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74212
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74312
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74396
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74478
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74558
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74632
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74741
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74843
00:31:56.551 Removing: /var/run/dpdk/spdk_pid74991
00:31:56.551 Removing: /var/run/dpdk/spdk_pid75292
00:31:56.551 Removing: /var/run/dpdk/spdk_pid75331
00:31:56.551 Removing: /var/run/dpdk/spdk_pid75793
00:31:56.551 Removing: /var/run/dpdk/spdk_pid75977
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76078
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76192
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76247
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76278
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76585
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76664
00:31:56.552 Removing: /var/run/dpdk/spdk_pid76755
00:31:56.552 Removing: /var/run/dpdk/spdk_pid77194
00:31:56.552 Removing: /var/run/dpdk/spdk_pid77346
00:31:56.552 Removing: /var/run/dpdk/spdk_pid78152
00:31:56.552 Removing: /var/run/dpdk/spdk_pid78301
00:31:56.552 Removing: /var/run/dpdk/spdk_pid78505
00:31:56.552 Removing: /var/run/dpdk/spdk_pid78612
00:31:56.552 Removing: /var/run/dpdk/spdk_pid78951
00:31:56.552 Removing: /var/run/dpdk/spdk_pid79217
00:31:56.552 Removing: /var/run/dpdk/spdk_pid79575
00:31:56.552 Removing: /var/run/dpdk/spdk_pid79786
00:31:56.552 Removing: /var/run/dpdk/spdk_pid79935
00:31:56.552 Removing: /var/run/dpdk/spdk_pid80005
00:31:56.811 Removing: /var/run/dpdk/spdk_pid80154
00:31:56.811 Removing: /var/run/dpdk/spdk_pid80190
00:31:56.811 Removing: /var/run/dpdk/spdk_pid80266
00:31:56.811 Removing: /var/run/dpdk/spdk_pid80476
00:31:56.811 Removing: /var/run/dpdk/spdk_pid80725
00:31:56.811 Removing: /var/run/dpdk/spdk_pid81166
00:31:56.811 Removing: /var/run/dpdk/spdk_pid81607
00:31:56.811 Removing: /var/run/dpdk/spdk_pid82060
00:31:56.811 Removing: /var/run/dpdk/spdk_pid82573
00:31:56.811 Removing: /var/run/dpdk/spdk_pid82716
00:31:56.811 Removing: /var/run/dpdk/spdk_pid82809
00:31:56.811 Removing: /var/run/dpdk/spdk_pid83444
00:31:56.811 Removing: /var/run/dpdk/spdk_pid83521
00:31:56.811 Removing: /var/run/dpdk/spdk_pid84013
00:31:56.811 Removing: /var/run/dpdk/spdk_pid84406
00:31:56.811 Removing: /var/run/dpdk/spdk_pid84939
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85072
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85125
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85189
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85246
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85303
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85510
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85602
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85665
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85743
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85778
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85856
00:31:56.811 Removing: /var/run/dpdk/spdk_pid85982
00:31:56.811 Clean
00:31:56.811 13:29:48 -- common/autotest_common.sh@1453 -- # return 0
00:31:57.071 13:29:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:31:57.071 13:29:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:57.071 13:29:48 -- common/autotest_common.sh@10 -- # set +x
00:31:57.071 13:29:48 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:31:57.071 13:29:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:57.071 13:29:48 -- common/autotest_common.sh@10 -- # set +x
00:31:57.071 13:29:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:57.071 13:29:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:57.071 13:29:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:57.071 13:29:48 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:31:57.071 13:29:48 -- spdk/autotest.sh@398 -- # hostname
00:31:57.071 13:29:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:57.331 geninfo: WARNING: invalid characters removed from testname!
00:32:23.890 13:30:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:25.795 13:30:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:27.703 13:30:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:30.239 13:30:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:32.149 13:30:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:34.731 13:30:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:36.643 13:30:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:36.643 13:30:27 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:36.643 13:30:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:36.643 13:30:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:36.643 13:30:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:36.643 13:30:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:36.643 + [[ -n 5247 ]]
00:32:36.643 + sudo kill 5247
00:32:36.653 [Pipeline] }
00:32:36.669 [Pipeline] // timeout
00:32:36.673 [Pipeline] }
00:32:36.686 [Pipeline] // stage
00:32:36.690 [Pipeline] }
00:32:36.702 [Pipeline] // catchError
00:32:36.710 [Pipeline] stage
00:32:36.713 [Pipeline] { (Stop VM)
00:32:36.724 [Pipeline] sh
00:32:37.008 + vagrant halt
00:32:40.298 ==> default: Halting domain...
00:32:46.884 [Pipeline] sh
00:32:47.167 + vagrant destroy -f
00:32:49.703 ==> default: Removing domain...
00:32:49.715 [Pipeline] sh
00:32:49.998 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:50.006 [Pipeline] }
00:32:50.017 [Pipeline] // stage
00:32:50.023 [Pipeline] }
00:32:50.036 [Pipeline] // dir
00:32:50.041 [Pipeline] }
00:32:50.055 [Pipeline] // wrap
00:32:50.061 [Pipeline] }
00:32:50.073 [Pipeline] // catchError
00:32:50.082 [Pipeline] stage
00:32:50.084 [Pipeline] { (Epilogue)
00:32:50.097 [Pipeline] sh
00:32:50.381 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:55.674 [Pipeline] catchError
00:32:55.676 [Pipeline] {
00:32:55.690 [Pipeline] sh
00:32:55.973 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:56.233 Artifacts sizes are good
00:32:56.242 [Pipeline] }
00:32:56.256 [Pipeline] // catchError
00:32:56.267 [Pipeline] archiveArtifacts
00:32:56.275 Archiving artifacts
00:32:56.416 [Pipeline] cleanWs
00:32:56.436 [WS-CLEANUP] Deleting project workspace...
00:32:56.436 [WS-CLEANUP] Deferred wipeout is used...
00:32:56.459 [WS-CLEANUP] done
00:32:56.461 [Pipeline] }
00:32:56.476 [Pipeline] // stage
00:32:56.482 [Pipeline] }
00:32:56.495 [Pipeline] // node
00:32:56.501 [Pipeline] End of Pipeline
00:32:56.543 Finished: SUCCESS
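For reference, the clear_lvols step traced earlier (ftl/common.sh@28-30, elapsed 00:31:42) pairs two JSON-RPC calls: bdev_lvol_get_lvstores lists every logical-volume store on the running target, jq -r '.[] | .uuid' extracts the uuids, and each store is then deleted with bdev_lvol_delete_lvstore. A condensed sketch of that loop, assuming rpc.py talks to the target's default /var/tmp/spdk.sock socket as in the run above:

    # Sketch of the lvstore cleanup traced at ftl/common.sh@28-30 above.
    clear_lvols() {
        # List all lvstores on the target and keep only their uuid fields.
        stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            # One store (89acc053-...) was deleted this way in the run above.
            scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }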
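The lcov calls at spdk/autotest.sh@398-407 above form a capture/merge/filter coverage pipeline: capture counters from the instrumented tree, merge them with the pre-test baseline, then strip bundled DPDK, system, and example paths so they do not count toward SPDK coverage. A condensed sketch with the repeated --rc options elided and SPDK_DIR/OUT standing in for the literal paths used above:

    # Capture test-time counters from the instrumented tree (autotest.sh@398).
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
    # Merge the pre-test baseline with the new capture (autotest.sh@399).
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Filter out paths that should not count toward coverage (autotest.sh@400-407).
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"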